00:00:00.001 Started by upstream project "autotest-per-patch" build number 124193 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.038 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.039 The recommended git tool is: git 00:00:00.039 using credential 00000000-0000-0000-0000-000000000002 00:00:00.043 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.055 Fetching changes from the remote Git repository 00:00:00.058 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.081 Using shallow fetch with depth 1 00:00:00.081 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.081 > git --version # timeout=10 00:00:00.120 > git --version # 'git version 2.39.2' 00:00:00.120 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.163 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.163 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.816 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.826 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.836 Checking out Revision 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 (FETCH_HEAD) 00:00:02.836 > git config core.sparsecheckout # timeout=10 00:00:02.846 > git read-tree -mu HEAD # timeout=10 00:00:02.861 > git checkout -f 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=5 00:00:02.877 Commit message: "pool: fixes for VisualBuild class" 00:00:02.877 > git rev-list --no-walk 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=10 00:00:02.943 [Pipeline] Start of Pipeline 00:00:02.956 [Pipeline] library 00:00:02.958 Loading library shm_lib@master 00:00:02.958 Library shm_lib@master is cached. Copying from home. 00:00:02.974 [Pipeline] node 00:00:02.982 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:02.984 [Pipeline] { 00:00:02.996 [Pipeline] catchError 00:00:02.998 [Pipeline] { 00:00:03.007 [Pipeline] wrap 00:00:03.015 [Pipeline] { 00:00:03.022 [Pipeline] stage 00:00:03.023 [Pipeline] { (Prologue) 00:00:03.203 [Pipeline] sh 00:00:03.489 + logger -p user.info -t JENKINS-CI 00:00:03.507 [Pipeline] echo 00:00:03.508 Node: CYP12 00:00:03.515 [Pipeline] sh 00:00:03.816 [Pipeline] setCustomBuildProperty 00:00:03.824 [Pipeline] echo 00:00:03.825 Cleanup processes 00:00:03.828 [Pipeline] sh 00:00:04.110 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.110 490859 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.122 [Pipeline] sh 00:00:04.404 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.404 ++ grep -v 'sudo pgrep' 00:00:04.404 ++ awk '{print $1}' 00:00:04.404 + sudo kill -9 00:00:04.404 + true 00:00:04.418 [Pipeline] cleanWs 00:00:04.429 [WS-CLEANUP] Deleting project workspace... 00:00:04.429 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.436 [WS-CLEANUP] done 00:00:04.441 [Pipeline] setCustomBuildProperty 00:00:04.456 [Pipeline] sh 00:00:04.746 + sudo git config --global --replace-all safe.directory '*' 00:00:04.798 [Pipeline] nodesByLabel 00:00:04.799 Found a total of 2 nodes with the 'sorcerer' label 00:00:04.808 [Pipeline] httpRequest 00:00:04.814 HttpMethod: GET 00:00:04.814 URL: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:04.817 Sending request to url: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:04.826 Response Code: HTTP/1.1 200 OK 00:00:04.826 Success: Status code 200 is in the accepted range: 200,404 00:00:04.827 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:05.468 [Pipeline] sh 00:00:05.778 + tar --no-same-owner -xf jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:05.794 [Pipeline] httpRequest 00:00:05.800 HttpMethod: GET 00:00:05.801 URL: http://10.211.164.101/packages/spdk_bab0baf303e77821a8713284f6bda58985dc0f07.tar.gz 00:00:05.804 Sending request to url: http://10.211.164.101/packages/spdk_bab0baf303e77821a8713284f6bda58985dc0f07.tar.gz 00:00:05.824 Response Code: HTTP/1.1 200 OK 00:00:05.824 Success: Status code 200 is in the accepted range: 200,404 00:00:05.825 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_bab0baf303e77821a8713284f6bda58985dc0f07.tar.gz 00:00:51.602 [Pipeline] sh 00:00:51.890 + tar --no-same-owner -xf spdk_bab0baf303e77821a8713284f6bda58985dc0f07.tar.gz 00:00:55.205 [Pipeline] sh 00:00:55.525 + git -C spdk log --oneline -n5 00:00:55.526 bab0baf30 pkgdep/git: Bump bpftrace 00:00:55.526 f5181c930 pkgdep/git: Bump ICE driver to the latest release 00:00:55.526 fc877cdd3 pkgdep/git: Bump IRDMA driver to the latest release 00:00:55.526 2a3be8dde nvmf: reference qpair through a variable 00:00:55.526 34e056f53 check_so_deps: remove unnecessary suppress entries 00:00:55.539 [Pipeline] } 00:00:55.557 [Pipeline] // stage 00:00:55.567 [Pipeline] stage 00:00:55.569 [Pipeline] { (Prepare) 00:00:55.588 [Pipeline] writeFile 00:00:55.607 [Pipeline] sh 00:00:55.891 + logger -p user.info -t JENKINS-CI 00:00:55.904 [Pipeline] sh 00:00:56.189 + logger -p user.info -t JENKINS-CI 00:00:56.204 [Pipeline] sh 00:00:56.490 + cat autorun-spdk.conf 00:00:56.491 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:56.491 SPDK_TEST_NVMF=1 00:00:56.491 SPDK_TEST_NVME_CLI=1 00:00:56.491 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:56.491 SPDK_TEST_NVMF_NICS=e810 00:00:56.491 SPDK_TEST_VFIOUSER=1 00:00:56.491 SPDK_RUN_UBSAN=1 00:00:56.491 NET_TYPE=phy 00:00:56.499 RUN_NIGHTLY=0 00:00:56.503 [Pipeline] readFile 00:00:56.530 [Pipeline] withEnv 00:00:56.532 [Pipeline] { 00:00:56.547 [Pipeline] sh 00:00:56.834 + set -ex 00:00:56.834 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:56.834 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:56.834 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:56.834 ++ SPDK_TEST_NVMF=1 00:00:56.834 ++ SPDK_TEST_NVME_CLI=1 00:00:56.834 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:56.834 ++ SPDK_TEST_NVMF_NICS=e810 00:00:56.834 ++ SPDK_TEST_VFIOUSER=1 00:00:56.834 ++ SPDK_RUN_UBSAN=1 00:00:56.834 ++ NET_TYPE=phy 00:00:56.834 ++ RUN_NIGHTLY=0 00:00:56.834 + case $SPDK_TEST_NVMF_NICS in 00:00:56.834 + DRIVERS=ice 00:00:56.834 + [[ tcp == \r\d\m\a ]] 00:00:56.834 + [[ -n ice ]] 00:00:56.834 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:56.834 rmmod: ERROR: Module 
mlx4_ib is not currently loaded 00:00:56.834 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:56.834 rmmod: ERROR: Module irdma is not currently loaded 00:00:56.834 rmmod: ERROR: Module i40iw is not currently loaded 00:00:56.834 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:56.834 + true 00:00:56.834 + for D in $DRIVERS 00:00:56.834 + sudo modprobe ice 00:00:56.834 + exit 0 00:00:56.845 [Pipeline] } 00:00:56.863 [Pipeline] // withEnv 00:00:56.869 [Pipeline] } 00:00:56.886 [Pipeline] // stage 00:00:56.896 [Pipeline] catchError 00:00:56.898 [Pipeline] { 00:00:56.914 [Pipeline] timeout 00:00:56.914 Timeout set to expire in 50 min 00:00:56.916 [Pipeline] { 00:00:56.933 [Pipeline] stage 00:00:56.936 [Pipeline] { (Tests) 00:00:56.954 [Pipeline] sh 00:00:57.243 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:57.243 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:57.243 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:57.244 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:57.244 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:57.244 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:57.244 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:57.244 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:57.244 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:57.244 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:57.244 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:57.244 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:57.244 + source /etc/os-release 00:00:57.244 ++ NAME='Fedora Linux' 00:00:57.244 ++ VERSION='38 (Cloud Edition)' 00:00:57.244 ++ ID=fedora 00:00:57.244 ++ VERSION_ID=38 00:00:57.244 ++ VERSION_CODENAME= 00:00:57.244 ++ PLATFORM_ID=platform:f38 00:00:57.244 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:57.244 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:57.244 ++ LOGO=fedora-logo-icon 00:00:57.244 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:57.244 ++ HOME_URL=https://fedoraproject.org/ 00:00:57.244 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:57.244 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:57.244 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:57.244 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:57.244 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:57.244 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:57.244 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:57.244 ++ SUPPORT_END=2024-05-14 00:00:57.244 ++ VARIANT='Cloud Edition' 00:00:57.244 ++ VARIANT_ID=cloud 00:00:57.244 + uname -a 00:00:57.244 Linux spdk-cyp-12 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:57.244 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:00.545 Hugepages 00:01:00.545 node hugesize free / total 00:01:00.545 node0 1048576kB 0 / 0 00:01:00.545 node0 2048kB 0 / 0 00:01:00.545 node1 1048576kB 0 / 0 00:01:00.545 node1 2048kB 0 / 0 00:01:00.545 00:01:00.545 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:00.545 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:00.545 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:00.545 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:00.545 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:00.545 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:00.545 I/OAT 0000:00:01.5 8086 0b00 0 
ioatdma - - 00:01:00.545 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:00.545 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:00.545 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:00.545 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:00.545 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:00.545 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:00.545 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:00.545 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:00.545 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:00.545 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:00.545 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:00.545 + rm -f /tmp/spdk-ld-path 00:01:00.545 + source autorun-spdk.conf 00:01:00.545 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:00.545 ++ SPDK_TEST_NVMF=1 00:01:00.545 ++ SPDK_TEST_NVME_CLI=1 00:01:00.545 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:00.545 ++ SPDK_TEST_NVMF_NICS=e810 00:01:00.545 ++ SPDK_TEST_VFIOUSER=1 00:01:00.545 ++ SPDK_RUN_UBSAN=1 00:01:00.545 ++ NET_TYPE=phy 00:01:00.545 ++ RUN_NIGHTLY=0 00:01:00.545 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:00.545 + [[ -n '' ]] 00:01:00.545 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:00.545 + for M in /var/spdk/build-*-manifest.txt 00:01:00.545 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:00.545 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:00.545 + for M in /var/spdk/build-*-manifest.txt 00:01:00.545 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:00.545 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:00.545 ++ uname 00:01:00.545 + [[ Linux == \L\i\n\u\x ]] 00:01:00.545 + sudo dmesg -T 00:01:00.545 + sudo dmesg --clear 00:01:00.545 + dmesg_pid=491860 00:01:00.545 + [[ Fedora Linux == FreeBSD ]] 00:01:00.545 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:00.545 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:00.545 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:00.545 + [[ -x /usr/src/fio-static/fio ]] 00:01:00.545 + export FIO_BIN=/usr/src/fio-static/fio 00:01:00.545 + FIO_BIN=/usr/src/fio-static/fio 00:01:00.545 + sudo dmesg -Tw 00:01:00.545 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:00.545 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:00.545 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:00.545 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:00.545 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:00.545 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:00.545 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:00.545 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:00.545 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:00.545 Test configuration: 00:01:00.545 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:00.545 SPDK_TEST_NVMF=1 00:01:00.545 SPDK_TEST_NVME_CLI=1 00:01:00.545 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:00.545 SPDK_TEST_NVMF_NICS=e810 00:01:00.545 SPDK_TEST_VFIOUSER=1 00:01:00.545 SPDK_RUN_UBSAN=1 00:01:00.545 NET_TYPE=phy 00:01:00.545 RUN_NIGHTLY=0 10:26:24 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:00.545 10:26:24 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:00.545 10:26:24 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:00.545 10:26:24 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:00.545 10:26:24 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:00.545 10:26:24 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:00.545 10:26:24 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:00.545 10:26:24 -- paths/export.sh@5 -- $ export PATH 00:01:00.545 10:26:24 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:00.545 10:26:24 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:00.545 10:26:24 -- common/autobuild_common.sh@437 -- $ date +%s 00:01:00.545 10:26:24 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718007984.XXXXXX 00:01:00.545 10:26:24 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718007984.Q3CQXQ 00:01:00.545 10:26:24 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:01:00.545 10:26:24 -- 
common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:01:00.545 10:26:24 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:00.545 10:26:24 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:00.545 10:26:24 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:00.545 10:26:24 -- common/autobuild_common.sh@453 -- $ get_config_params 00:01:00.545 10:26:24 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:00.545 10:26:24 -- common/autotest_common.sh@10 -- $ set +x 00:01:00.545 10:26:24 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:00.545 10:26:24 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:01:00.545 10:26:24 -- pm/common@17 -- $ local monitor 00:01:00.545 10:26:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:00.545 10:26:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:00.545 10:26:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:00.545 10:26:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:00.545 10:26:24 -- pm/common@21 -- $ date +%s 00:01:00.545 10:26:24 -- pm/common@21 -- $ date +%s 00:01:00.545 10:26:24 -- pm/common@25 -- $ sleep 1 00:01:00.545 10:26:24 -- pm/common@21 -- $ date +%s 00:01:00.545 10:26:24 -- pm/common@21 -- $ date +%s 00:01:00.545 10:26:24 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718007984 00:01:00.545 10:26:24 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718007984 00:01:00.545 10:26:24 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718007984 00:01:00.546 10:26:24 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718007984 00:01:00.546 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718007984_collect-vmstat.pm.log 00:01:00.546 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718007984_collect-cpu-load.pm.log 00:01:00.546 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718007984_collect-cpu-temp.pm.log 00:01:00.546 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718007984_collect-bmc-pm.bmc.pm.log 00:01:01.489 10:26:25 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:01:01.489 10:26:25 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:01.489 10:26:25 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:01.489 10:26:25 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:01.489 10:26:25 -- spdk/autobuild.sh@16 -- $ date -u 00:01:01.489 Mon Jun 10 08:26:25 AM UTC 2024 00:01:01.489 10:26:25 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:01.489 v24.09-pre-49-gbab0baf30 00:01:01.489 10:26:25 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:01.489 10:26:25 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:01.489 10:26:25 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:01.489 10:26:25 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']' 00:01:01.489 10:26:25 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:01:01.489 10:26:25 -- common/autotest_common.sh@10 -- $ set +x 00:01:01.751 ************************************ 00:01:01.751 START TEST ubsan 00:01:01.751 ************************************ 00:01:01.751 10:26:25 ubsan -- common/autotest_common.sh@1124 -- $ echo 'using ubsan' 00:01:01.751 using ubsan 00:01:01.751 00:01:01.751 real 0m0.001s 00:01:01.751 user 0m0.000s 00:01:01.751 sys 0m0.000s 00:01:01.751 10:26:25 ubsan -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:01:01.751 10:26:25 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:01.751 ************************************ 00:01:01.751 END TEST ubsan 00:01:01.751 ************************************ 00:01:01.751 10:26:25 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:01.751 10:26:25 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:01.751 10:26:25 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:01.751 10:26:25 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:01.751 10:26:25 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:01.751 10:26:25 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:01.751 10:26:25 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:01.751 10:26:25 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:01.751 10:26:25 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:01.751 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:01.751 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:02.324 Using 'verbs' RDMA provider 00:01:18.181 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:30.460 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:30.460 Creating mk/config.mk...done. 00:01:30.460 Creating mk/cc.flags.mk...done. 00:01:30.460 Type 'make' to build. 00:01:30.460 10:26:53 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:01:30.460 10:26:53 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']' 00:01:30.460 10:26:53 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:01:30.460 10:26:53 -- common/autotest_common.sh@10 -- $ set +x 00:01:30.460 ************************************ 00:01:30.460 START TEST make 00:01:30.460 ************************************ 00:01:30.460 10:26:54 make -- common/autotest_common.sh@1124 -- $ make -j144 00:01:30.460 make[1]: Nothing to be done for 'all'. 
00:01:31.401 The Meson build system 00:01:31.401 Version: 1.3.1 00:01:31.401 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:31.401 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:31.401 Build type: native build 00:01:31.401 Project name: libvfio-user 00:01:31.401 Project version: 0.0.1 00:01:31.401 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:31.401 C linker for the host machine: cc ld.bfd 2.39-16 00:01:31.401 Host machine cpu family: x86_64 00:01:31.401 Host machine cpu: x86_64 00:01:31.401 Run-time dependency threads found: YES 00:01:31.401 Library dl found: YES 00:01:31.401 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:31.401 Run-time dependency json-c found: YES 0.17 00:01:31.401 Run-time dependency cmocka found: YES 1.1.7 00:01:31.401 Program pytest-3 found: NO 00:01:31.401 Program flake8 found: NO 00:01:31.401 Program misspell-fixer found: NO 00:01:31.401 Program restructuredtext-lint found: NO 00:01:31.401 Program valgrind found: YES (/usr/bin/valgrind) 00:01:31.401 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:31.401 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:31.401 Compiler for C supports arguments -Wwrite-strings: YES 00:01:31.401 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:31.401 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:31.401 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:31.401 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:31.401 Build targets in project: 8 00:01:31.401 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:31.401 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:31.401 00:01:31.401 libvfio-user 0.0.1 00:01:31.401 00:01:31.401 User defined options 00:01:31.401 buildtype : debug 00:01:31.401 default_library: shared 00:01:31.401 libdir : /usr/local/lib 00:01:31.401 00:01:31.401 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:31.970 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:31.970 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:31.970 [2/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:31.970 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:31.970 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:31.971 [5/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:31.971 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:31.971 [7/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:31.971 [8/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:31.971 [9/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:31.971 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:31.971 [11/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:31.971 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:31.971 [13/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:31.971 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:31.971 [15/37] Compiling C object samples/null.p/null.c.o 00:01:31.971 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:31.971 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:31.971 [18/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:31.971 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:31.971 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:31.971 [21/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:31.971 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:31.971 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:31.971 [24/37] Compiling C object samples/server.p/server.c.o 00:01:31.971 [25/37] Compiling C object samples/client.p/client.c.o 00:01:31.971 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:31.971 [27/37] Linking target samples/client 00:01:31.971 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:31.971 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:31.971 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:31.971 [31/37] Linking target test/unit_tests 00:01:32.231 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:32.231 [33/37] Linking target samples/shadow_ioeventfd_server 00:01:32.231 [34/37] Linking target samples/lspci 00:01:32.231 [35/37] Linking target samples/null 00:01:32.231 [36/37] Linking target samples/server 00:01:32.231 [37/37] Linking target samples/gpio-pci-idio-16 00:01:32.231 INFO: autodetecting backend as ninja 00:01:32.231 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:32.231 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:32.493 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:32.493 ninja: no work to do. 00:01:39.092 The Meson build system 00:01:39.092 Version: 1.3.1 00:01:39.092 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:39.092 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:39.092 Build type: native build 00:01:39.092 Program cat found: YES (/usr/bin/cat) 00:01:39.092 Project name: DPDK 00:01:39.092 Project version: 24.03.0 00:01:39.092 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:39.092 C linker for the host machine: cc ld.bfd 2.39-16 00:01:39.092 Host machine cpu family: x86_64 00:01:39.092 Host machine cpu: x86_64 00:01:39.092 Message: ## Building in Developer Mode ## 00:01:39.092 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:39.092 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:39.092 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:39.092 Program python3 found: YES (/usr/bin/python3) 00:01:39.092 Program cat found: YES (/usr/bin/cat) 00:01:39.092 Compiler for C supports arguments -march=native: YES 00:01:39.092 Checking for size of "void *" : 8 00:01:39.092 Checking for size of "void *" : 8 (cached) 00:01:39.092 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:39.092 Library m found: YES 00:01:39.092 Library numa found: YES 00:01:39.092 Has header "numaif.h" : YES 00:01:39.092 Library fdt found: NO 00:01:39.092 Library execinfo found: NO 00:01:39.092 Has header "execinfo.h" : YES 00:01:39.092 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:39.092 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:39.092 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:39.092 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:39.093 Run-time dependency openssl found: YES 3.0.9 00:01:39.093 Run-time dependency libpcap found: YES 1.10.4 00:01:39.093 Has header "pcap.h" with dependency libpcap: YES 00:01:39.093 Compiler for C supports arguments -Wcast-qual: YES 00:01:39.093 Compiler for C supports arguments -Wdeprecated: YES 00:01:39.093 Compiler for C supports arguments -Wformat: YES 00:01:39.093 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:39.093 Compiler for C supports arguments -Wformat-security: NO 00:01:39.093 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:39.093 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:39.093 Compiler for C supports arguments -Wnested-externs: YES 00:01:39.093 Compiler for C supports arguments -Wold-style-definition: YES 00:01:39.093 Compiler for C supports arguments -Wpointer-arith: YES 00:01:39.093 Compiler for C supports arguments -Wsign-compare: YES 00:01:39.093 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:39.093 Compiler for C supports arguments -Wundef: YES 00:01:39.093 Compiler for C supports arguments -Wwrite-strings: YES 00:01:39.093 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:39.093 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:39.093 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:39.093 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:39.093 Program objdump found: YES (/usr/bin/objdump) 00:01:39.093 Compiler for C supports arguments -mavx512f: YES 00:01:39.093 Checking if "AVX512 checking" compiles: YES 00:01:39.093 Fetching value of define "__SSE4_2__" : 1 00:01:39.093 Fetching value of define "__AES__" : 1 00:01:39.093 Fetching value of define "__AVX__" : 1 00:01:39.093 Fetching value of define "__AVX2__" : 1 00:01:39.093 Fetching value of define "__AVX512BW__" : 1 00:01:39.093 Fetching value of define "__AVX512CD__" : 1 00:01:39.093 Fetching value of define "__AVX512DQ__" : 1 00:01:39.093 Fetching value of define "__AVX512F__" : 1 00:01:39.093 Fetching value of define "__AVX512VL__" : 1 00:01:39.093 Fetching value of define "__PCLMUL__" : 1 00:01:39.093 Fetching value of define "__RDRND__" : 1 00:01:39.093 Fetching value of define "__RDSEED__" : 1 00:01:39.093 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:39.093 Fetching value of define "__znver1__" : (undefined) 00:01:39.093 Fetching value of define "__znver2__" : (undefined) 00:01:39.093 Fetching value of define "__znver3__" : (undefined) 00:01:39.093 Fetching value of define "__znver4__" : (undefined) 00:01:39.093 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:39.093 Message: lib/log: Defining dependency "log" 00:01:39.093 Message: lib/kvargs: Defining dependency "kvargs" 00:01:39.093 Message: lib/telemetry: Defining dependency "telemetry" 00:01:39.093 Checking for function "getentropy" : NO 00:01:39.093 Message: lib/eal: Defining dependency "eal" 00:01:39.093 Message: lib/ring: Defining dependency "ring" 00:01:39.093 Message: lib/rcu: Defining dependency "rcu" 00:01:39.093 Message: lib/mempool: Defining dependency "mempool" 00:01:39.093 Message: lib/mbuf: Defining dependency "mbuf" 00:01:39.093 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:39.093 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:39.093 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:39.093 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:39.093 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:39.093 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:39.093 Compiler for C supports arguments -mpclmul: YES 00:01:39.093 Compiler for C supports arguments -maes: YES 00:01:39.093 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:39.093 Compiler for C supports arguments -mavx512bw: YES 00:01:39.093 Compiler for C supports arguments -mavx512dq: YES 00:01:39.093 Compiler for C supports arguments -mavx512vl: YES 00:01:39.093 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:39.093 Compiler for C supports arguments -mavx2: YES 00:01:39.093 Compiler for C supports arguments -mavx: YES 00:01:39.093 Message: lib/net: Defining dependency "net" 00:01:39.093 Message: lib/meter: Defining dependency "meter" 00:01:39.093 Message: lib/ethdev: Defining dependency "ethdev" 00:01:39.093 Message: lib/pci: Defining dependency "pci" 00:01:39.093 Message: lib/cmdline: Defining dependency "cmdline" 00:01:39.093 Message: lib/hash: Defining dependency "hash" 00:01:39.093 Message: lib/timer: Defining dependency "timer" 00:01:39.093 Message: lib/compressdev: Defining dependency "compressdev" 00:01:39.093 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:39.093 Message: lib/dmadev: Defining dependency "dmadev" 00:01:39.093 Compiler for C 
supports arguments -Wno-cast-qual: YES 00:01:39.093 Message: lib/power: Defining dependency "power" 00:01:39.093 Message: lib/reorder: Defining dependency "reorder" 00:01:39.093 Message: lib/security: Defining dependency "security" 00:01:39.093 Has header "linux/userfaultfd.h" : YES 00:01:39.093 Has header "linux/vduse.h" : YES 00:01:39.093 Message: lib/vhost: Defining dependency "vhost" 00:01:39.093 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:39.093 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:39.093 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:39.093 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:39.093 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:39.093 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:39.093 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:39.093 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:39.093 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:39.093 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:39.093 Program doxygen found: YES (/usr/bin/doxygen) 00:01:39.093 Configuring doxy-api-html.conf using configuration 00:01:39.093 Configuring doxy-api-man.conf using configuration 00:01:39.093 Program mandb found: YES (/usr/bin/mandb) 00:01:39.093 Program sphinx-build found: NO 00:01:39.093 Configuring rte_build_config.h using configuration 00:01:39.093 Message: 00:01:39.093 ================= 00:01:39.093 Applications Enabled 00:01:39.093 ================= 00:01:39.093 00:01:39.093 apps: 00:01:39.093 00:01:39.093 00:01:39.093 Message: 00:01:39.093 ================= 00:01:39.093 Libraries Enabled 00:01:39.093 ================= 00:01:39.093 00:01:39.093 libs: 00:01:39.093 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:39.093 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:39.093 cryptodev, dmadev, power, reorder, security, vhost, 00:01:39.093 00:01:39.093 Message: 00:01:39.093 =============== 00:01:39.093 Drivers Enabled 00:01:39.093 =============== 00:01:39.093 00:01:39.093 common: 00:01:39.093 00:01:39.093 bus: 00:01:39.093 pci, vdev, 00:01:39.093 mempool: 00:01:39.093 ring, 00:01:39.093 dma: 00:01:39.093 00:01:39.093 net: 00:01:39.093 00:01:39.093 crypto: 00:01:39.093 00:01:39.093 compress: 00:01:39.093 00:01:39.093 vdpa: 00:01:39.093 00:01:39.093 00:01:39.093 Message: 00:01:39.093 ================= 00:01:39.093 Content Skipped 00:01:39.093 ================= 00:01:39.093 00:01:39.093 apps: 00:01:39.093 dumpcap: explicitly disabled via build config 00:01:39.093 graph: explicitly disabled via build config 00:01:39.093 pdump: explicitly disabled via build config 00:01:39.093 proc-info: explicitly disabled via build config 00:01:39.093 test-acl: explicitly disabled via build config 00:01:39.093 test-bbdev: explicitly disabled via build config 00:01:39.093 test-cmdline: explicitly disabled via build config 00:01:39.093 test-compress-perf: explicitly disabled via build config 00:01:39.093 test-crypto-perf: explicitly disabled via build config 00:01:39.093 test-dma-perf: explicitly disabled via build config 00:01:39.093 test-eventdev: explicitly disabled via build config 00:01:39.093 test-fib: explicitly disabled via build config 00:01:39.093 test-flow-perf: explicitly disabled via build config 00:01:39.093 test-gpudev: explicitly disabled via build config 00:01:39.093 
test-mldev: explicitly disabled via build config 00:01:39.093 test-pipeline: explicitly disabled via build config 00:01:39.093 test-pmd: explicitly disabled via build config 00:01:39.093 test-regex: explicitly disabled via build config 00:01:39.093 test-sad: explicitly disabled via build config 00:01:39.093 test-security-perf: explicitly disabled via build config 00:01:39.093 00:01:39.093 libs: 00:01:39.093 argparse: explicitly disabled via build config 00:01:39.093 metrics: explicitly disabled via build config 00:01:39.093 acl: explicitly disabled via build config 00:01:39.093 bbdev: explicitly disabled via build config 00:01:39.093 bitratestats: explicitly disabled via build config 00:01:39.093 bpf: explicitly disabled via build config 00:01:39.093 cfgfile: explicitly disabled via build config 00:01:39.093 distributor: explicitly disabled via build config 00:01:39.093 efd: explicitly disabled via build config 00:01:39.093 eventdev: explicitly disabled via build config 00:01:39.093 dispatcher: explicitly disabled via build config 00:01:39.093 gpudev: explicitly disabled via build config 00:01:39.093 gro: explicitly disabled via build config 00:01:39.093 gso: explicitly disabled via build config 00:01:39.093 ip_frag: explicitly disabled via build config 00:01:39.093 jobstats: explicitly disabled via build config 00:01:39.093 latencystats: explicitly disabled via build config 00:01:39.093 lpm: explicitly disabled via build config 00:01:39.093 member: explicitly disabled via build config 00:01:39.093 pcapng: explicitly disabled via build config 00:01:39.093 rawdev: explicitly disabled via build config 00:01:39.093 regexdev: explicitly disabled via build config 00:01:39.094 mldev: explicitly disabled via build config 00:01:39.094 rib: explicitly disabled via build config 00:01:39.094 sched: explicitly disabled via build config 00:01:39.094 stack: explicitly disabled via build config 00:01:39.094 ipsec: explicitly disabled via build config 00:01:39.094 pdcp: explicitly disabled via build config 00:01:39.094 fib: explicitly disabled via build config 00:01:39.094 port: explicitly disabled via build config 00:01:39.094 pdump: explicitly disabled via build config 00:01:39.094 table: explicitly disabled via build config 00:01:39.094 pipeline: explicitly disabled via build config 00:01:39.094 graph: explicitly disabled via build config 00:01:39.094 node: explicitly disabled via build config 00:01:39.094 00:01:39.094 drivers: 00:01:39.094 common/cpt: not in enabled drivers build config 00:01:39.094 common/dpaax: not in enabled drivers build config 00:01:39.094 common/iavf: not in enabled drivers build config 00:01:39.094 common/idpf: not in enabled drivers build config 00:01:39.094 common/ionic: not in enabled drivers build config 00:01:39.094 common/mvep: not in enabled drivers build config 00:01:39.094 common/octeontx: not in enabled drivers build config 00:01:39.094 bus/auxiliary: not in enabled drivers build config 00:01:39.094 bus/cdx: not in enabled drivers build config 00:01:39.094 bus/dpaa: not in enabled drivers build config 00:01:39.094 bus/fslmc: not in enabled drivers build config 00:01:39.094 bus/ifpga: not in enabled drivers build config 00:01:39.094 bus/platform: not in enabled drivers build config 00:01:39.094 bus/uacce: not in enabled drivers build config 00:01:39.094 bus/vmbus: not in enabled drivers build config 00:01:39.094 common/cnxk: not in enabled drivers build config 00:01:39.094 common/mlx5: not in enabled drivers build config 00:01:39.094 common/nfp: not in enabled drivers 
build config 00:01:39.094 common/nitrox: not in enabled drivers build config 00:01:39.094 common/qat: not in enabled drivers build config 00:01:39.094 common/sfc_efx: not in enabled drivers build config 00:01:39.094 mempool/bucket: not in enabled drivers build config 00:01:39.094 mempool/cnxk: not in enabled drivers build config 00:01:39.094 mempool/dpaa: not in enabled drivers build config 00:01:39.094 mempool/dpaa2: not in enabled drivers build config 00:01:39.094 mempool/octeontx: not in enabled drivers build config 00:01:39.094 mempool/stack: not in enabled drivers build config 00:01:39.094 dma/cnxk: not in enabled drivers build config 00:01:39.094 dma/dpaa: not in enabled drivers build config 00:01:39.094 dma/dpaa2: not in enabled drivers build config 00:01:39.094 dma/hisilicon: not in enabled drivers build config 00:01:39.094 dma/idxd: not in enabled drivers build config 00:01:39.094 dma/ioat: not in enabled drivers build config 00:01:39.094 dma/skeleton: not in enabled drivers build config 00:01:39.094 net/af_packet: not in enabled drivers build config 00:01:39.094 net/af_xdp: not in enabled drivers build config 00:01:39.094 net/ark: not in enabled drivers build config 00:01:39.094 net/atlantic: not in enabled drivers build config 00:01:39.094 net/avp: not in enabled drivers build config 00:01:39.094 net/axgbe: not in enabled drivers build config 00:01:39.094 net/bnx2x: not in enabled drivers build config 00:01:39.094 net/bnxt: not in enabled drivers build config 00:01:39.094 net/bonding: not in enabled drivers build config 00:01:39.094 net/cnxk: not in enabled drivers build config 00:01:39.094 net/cpfl: not in enabled drivers build config 00:01:39.094 net/cxgbe: not in enabled drivers build config 00:01:39.094 net/dpaa: not in enabled drivers build config 00:01:39.094 net/dpaa2: not in enabled drivers build config 00:01:39.094 net/e1000: not in enabled drivers build config 00:01:39.094 net/ena: not in enabled drivers build config 00:01:39.094 net/enetc: not in enabled drivers build config 00:01:39.094 net/enetfec: not in enabled drivers build config 00:01:39.094 net/enic: not in enabled drivers build config 00:01:39.094 net/failsafe: not in enabled drivers build config 00:01:39.094 net/fm10k: not in enabled drivers build config 00:01:39.094 net/gve: not in enabled drivers build config 00:01:39.094 net/hinic: not in enabled drivers build config 00:01:39.094 net/hns3: not in enabled drivers build config 00:01:39.094 net/i40e: not in enabled drivers build config 00:01:39.094 net/iavf: not in enabled drivers build config 00:01:39.094 net/ice: not in enabled drivers build config 00:01:39.094 net/idpf: not in enabled drivers build config 00:01:39.094 net/igc: not in enabled drivers build config 00:01:39.094 net/ionic: not in enabled drivers build config 00:01:39.094 net/ipn3ke: not in enabled drivers build config 00:01:39.094 net/ixgbe: not in enabled drivers build config 00:01:39.094 net/mana: not in enabled drivers build config 00:01:39.094 net/memif: not in enabled drivers build config 00:01:39.094 net/mlx4: not in enabled drivers build config 00:01:39.094 net/mlx5: not in enabled drivers build config 00:01:39.094 net/mvneta: not in enabled drivers build config 00:01:39.094 net/mvpp2: not in enabled drivers build config 00:01:39.094 net/netvsc: not in enabled drivers build config 00:01:39.094 net/nfb: not in enabled drivers build config 00:01:39.094 net/nfp: not in enabled drivers build config 00:01:39.094 net/ngbe: not in enabled drivers build config 00:01:39.094 net/null: not in 
enabled drivers build config 00:01:39.094 net/octeontx: not in enabled drivers build config 00:01:39.094 net/octeon_ep: not in enabled drivers build config 00:01:39.094 net/pcap: not in enabled drivers build config 00:01:39.094 net/pfe: not in enabled drivers build config 00:01:39.094 net/qede: not in enabled drivers build config 00:01:39.094 net/ring: not in enabled drivers build config 00:01:39.094 net/sfc: not in enabled drivers build config 00:01:39.094 net/softnic: not in enabled drivers build config 00:01:39.094 net/tap: not in enabled drivers build config 00:01:39.094 net/thunderx: not in enabled drivers build config 00:01:39.094 net/txgbe: not in enabled drivers build config 00:01:39.094 net/vdev_netvsc: not in enabled drivers build config 00:01:39.094 net/vhost: not in enabled drivers build config 00:01:39.094 net/virtio: not in enabled drivers build config 00:01:39.094 net/vmxnet3: not in enabled drivers build config 00:01:39.094 raw/*: missing internal dependency, "rawdev" 00:01:39.094 crypto/armv8: not in enabled drivers build config 00:01:39.094 crypto/bcmfs: not in enabled drivers build config 00:01:39.094 crypto/caam_jr: not in enabled drivers build config 00:01:39.094 crypto/ccp: not in enabled drivers build config 00:01:39.094 crypto/cnxk: not in enabled drivers build config 00:01:39.094 crypto/dpaa_sec: not in enabled drivers build config 00:01:39.094 crypto/dpaa2_sec: not in enabled drivers build config 00:01:39.094 crypto/ipsec_mb: not in enabled drivers build config 00:01:39.094 crypto/mlx5: not in enabled drivers build config 00:01:39.094 crypto/mvsam: not in enabled drivers build config 00:01:39.094 crypto/nitrox: not in enabled drivers build config 00:01:39.094 crypto/null: not in enabled drivers build config 00:01:39.094 crypto/octeontx: not in enabled drivers build config 00:01:39.094 crypto/openssl: not in enabled drivers build config 00:01:39.094 crypto/scheduler: not in enabled drivers build config 00:01:39.094 crypto/uadk: not in enabled drivers build config 00:01:39.094 crypto/virtio: not in enabled drivers build config 00:01:39.094 compress/isal: not in enabled drivers build config 00:01:39.094 compress/mlx5: not in enabled drivers build config 00:01:39.094 compress/nitrox: not in enabled drivers build config 00:01:39.094 compress/octeontx: not in enabled drivers build config 00:01:39.094 compress/zlib: not in enabled drivers build config 00:01:39.094 regex/*: missing internal dependency, "regexdev" 00:01:39.094 ml/*: missing internal dependency, "mldev" 00:01:39.094 vdpa/ifc: not in enabled drivers build config 00:01:39.094 vdpa/mlx5: not in enabled drivers build config 00:01:39.094 vdpa/nfp: not in enabled drivers build config 00:01:39.094 vdpa/sfc: not in enabled drivers build config 00:01:39.094 event/*: missing internal dependency, "eventdev" 00:01:39.094 baseband/*: missing internal dependency, "bbdev" 00:01:39.094 gpu/*: missing internal dependency, "gpudev" 00:01:39.094 00:01:39.094 00:01:39.094 Build targets in project: 84 00:01:39.094 00:01:39.094 DPDK 24.03.0 00:01:39.094 00:01:39.094 User defined options 00:01:39.094 buildtype : debug 00:01:39.094 default_library : shared 00:01:39.094 libdir : lib 00:01:39.094 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:39.094 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:39.094 c_link_args : 00:01:39.094 cpu_instruction_set: native 00:01:39.094 disable_apps : 
test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:01:39.094 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,argparse,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:01:39.094 enable_docs : false 00:01:39.094 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:39.094 enable_kmods : false 00:01:39.094 tests : false 00:01:39.094 00:01:39.094 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:39.094 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:39.094 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:39.094 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:39.094 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:39.094 [4/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:39.094 [5/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:39.094 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:39.094 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:39.354 [8/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:39.354 [9/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:39.354 [10/267] Linking static target lib/librte_kvargs.a 00:01:39.354 [11/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:39.354 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:39.354 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:39.354 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:39.354 [15/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:39.354 [16/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:39.354 [17/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:39.354 [18/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:39.354 [19/267] Linking static target lib/librte_log.a 00:01:39.354 [20/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:39.354 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:39.354 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:39.354 [23/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:39.354 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:39.354 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:39.354 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:39.354 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:39.354 [28/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:39.354 [29/267] Linking static target lib/librte_pci.a 00:01:39.354 [30/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:39.354 [31/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:39.354 [32/267] 
Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:39.612 [33/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:39.612 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:39.612 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:39.612 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:39.612 [37/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:39.612 [38/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:39.612 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:39.612 [40/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:39.612 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:39.612 [42/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.612 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:39.613 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:39.613 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:39.613 [46/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:39.613 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:39.613 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:39.613 [49/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.613 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:39.873 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:39.873 [52/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:39.873 [53/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:39.873 [54/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:39.873 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:39.873 [56/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:39.873 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:39.873 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:39.873 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:39.873 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:39.873 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:39.873 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:39.873 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:39.873 [64/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:39.873 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:39.873 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:39.873 [67/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:39.873 [68/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:39.873 [69/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:39.873 [70/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:39.873 [71/267] Compiling C object 
lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:39.873 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:39.873 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:39.873 [74/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:39.873 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:39.873 [76/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:39.873 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:39.873 [78/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:39.873 [79/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:39.873 [80/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:39.873 [81/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:39.873 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:39.873 [83/267] Linking static target lib/librte_meter.a 00:01:39.873 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:39.873 [85/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:39.873 [86/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:39.873 [87/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:39.873 [88/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:39.873 [89/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:39.873 [90/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:39.873 [91/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:39.873 [92/267] Linking static target lib/librte_ring.a 00:01:39.873 [93/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:39.873 [94/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:39.873 [95/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:39.873 [96/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:39.873 [97/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:39.873 [98/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:39.873 [99/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:39.873 [100/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:39.873 [101/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:39.873 [102/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:39.873 [103/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:39.873 [104/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:39.873 [105/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:39.873 [106/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:39.873 [107/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:39.873 [108/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:39.873 [109/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:39.873 [110/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:39.873 [111/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:39.873 [112/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:39.873 [113/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:39.873 [114/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:39.873 [115/267] Linking static target lib/librte_dmadev.a 00:01:39.873 [116/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:39.873 [117/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:39.873 [118/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:39.873 [119/267] Linking static target lib/librte_telemetry.a 00:01:39.873 [120/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:39.873 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:39.873 [122/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:39.873 [123/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:39.873 [124/267] Linking static target lib/librte_cmdline.a 00:01:39.873 [125/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:39.873 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:39.873 [127/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:39.873 [128/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:39.873 [129/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:39.873 [130/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:39.873 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:39.873 [132/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:39.873 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:39.873 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:39.873 [135/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:39.873 [136/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:39.873 [137/267] Linking static target lib/librte_rcu.a 00:01:39.873 [138/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:39.873 [139/267] Linking static target lib/librte_net.a 00:01:39.873 [140/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:39.873 [141/267] Linking static target lib/librte_timer.a 00:01:39.873 [142/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:39.873 [143/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:39.873 [144/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:39.873 [145/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:39.873 [146/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:39.873 [147/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:39.873 [148/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.873 [149/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:39.873 [150/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:39.873 [151/267] Linking static target lib/librte_compressdev.a 00:01:39.873 [152/267] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:39.873 [153/267] Linking static target lib/librte_reorder.a 00:01:39.873 [154/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:39.873 [155/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:39.873 [156/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:39.873 [157/267] Linking target lib/librte_log.so.24.1 00:01:39.873 [158/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:39.873 [159/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:40.134 [160/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:40.134 [161/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:40.134 [162/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:40.134 [163/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:40.134 [164/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:40.134 [165/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:40.134 [166/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:40.134 [167/267] Linking static target lib/librte_mempool.a 00:01:40.134 [168/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:40.134 [169/267] Linking static target lib/librte_security.a 00:01:40.134 [170/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:40.134 [171/267] Linking static target lib/librte_power.a 00:01:40.134 [172/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:40.134 [173/267] Linking static target lib/librte_eal.a 00:01:40.134 [174/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.134 [175/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:40.135 [176/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:40.135 [177/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:40.135 [178/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:40.135 [179/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:40.135 [180/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:40.135 [181/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:40.135 [182/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:40.135 [183/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:40.135 [184/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:40.135 [185/267] Linking static target lib/librte_mbuf.a 00:01:40.135 [186/267] Linking static target drivers/librte_bus_vdev.a 00:01:40.135 [187/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:40.135 [188/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.135 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:40.135 [190/267] Linking target lib/librte_kvargs.so.24.1 00:01:40.135 [191/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:40.135 [192/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:40.135 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:40.135 
[194/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:40.135 [195/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:40.135 [196/267] Linking static target lib/librte_hash.a 00:01:40.135 [197/267] Linking static target drivers/librte_bus_pci.a 00:01:40.396 [198/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:40.396 [199/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:40.396 [200/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.396 [201/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:40.396 [202/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:40.396 [203/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:40.396 [204/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.396 [205/267] Linking static target drivers/librte_mempool_ring.a 00:01:40.396 [206/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:40.396 [207/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:40.396 [208/267] Linking static target lib/librte_cryptodev.a 00:01:40.396 [209/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.396 [210/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.657 [211/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.657 [212/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.657 [213/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:40.657 [214/267] Linking target lib/librte_telemetry.so.24.1 00:01:40.657 [215/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.657 [216/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.657 [217/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.657 [218/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:40.918 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:40.918 [220/267] Linking static target lib/librte_ethdev.a 00:01:40.918 [221/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.918 [222/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.918 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.179 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.179 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.179 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.191 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:42.191 [228/267] Linking static target lib/librte_vhost.a 00:01:42.452 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to 
capture output) 00:01:44.370 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.964 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.537 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.798 [233/267] Linking target lib/librte_eal.so.24.1 00:01:51.798 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:52.059 [235/267] Linking target lib/librte_meter.so.24.1 00:01:52.059 [236/267] Linking target lib/librte_ring.so.24.1 00:01:52.059 [237/267] Linking target lib/librte_pci.so.24.1 00:01:52.059 [238/267] Linking target lib/librte_timer.so.24.1 00:01:52.059 [239/267] Linking target lib/librte_dmadev.so.24.1 00:01:52.059 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:01:52.059 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:52.059 [242/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:52.059 [243/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:52.059 [244/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:52.059 [245/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:52.059 [246/267] Linking target lib/librte_mempool.so.24.1 00:01:52.059 [247/267] Linking target lib/librte_rcu.so.24.1 00:01:52.059 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:01:52.320 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:52.320 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:52.320 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:01:52.320 [252/267] Linking target lib/librte_mbuf.so.24.1 00:01:52.582 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:52.582 [254/267] Linking target lib/librte_compressdev.so.24.1 00:01:52.582 [255/267] Linking target lib/librte_net.so.24.1 00:01:52.582 [256/267] Linking target lib/librte_reorder.so.24.1 00:01:52.582 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:01:52.582 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:52.582 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:52.843 [260/267] Linking target lib/librte_hash.so.24.1 00:01:52.843 [261/267] Linking target lib/librte_cmdline.so.24.1 00:01:52.843 [262/267] Linking target lib/librte_ethdev.so.24.1 00:01:52.843 [263/267] Linking target lib/librte_security.so.24.1 00:01:52.843 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:52.843 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:53.105 [266/267] Linking target lib/librte_power.so.24.1 00:01:53.105 [267/267] Linking target lib/librte_vhost.so.24.1 00:01:53.105 INFO: autodetecting backend as ninja 00:01:53.105 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:01:54.049 CC lib/ut/ut.o 00:01:54.049 CC lib/ut_mock/mock.o 00:01:54.049 CC lib/log/log.o 00:01:54.049 CC lib/log/log_flags.o 00:01:54.049 CC lib/log/log_deprecated.o 00:01:54.310 LIB libspdk_ut.a 00:01:54.310 LIB libspdk_log.a 00:01:54.310 LIB libspdk_ut_mock.a 
00:01:54.310 SO libspdk_ut.so.2.0 00:01:54.310 SO libspdk_ut_mock.so.6.0 00:01:54.310 SO libspdk_log.so.7.0 00:01:54.310 SYMLINK libspdk_ut.so 00:01:54.310 SYMLINK libspdk_ut_mock.so 00:01:54.310 SYMLINK libspdk_log.so 00:01:54.882 CC lib/dma/dma.o 00:01:54.882 CC lib/util/base64.o 00:01:54.882 CXX lib/trace_parser/trace.o 00:01:54.882 CC lib/util/bit_array.o 00:01:54.882 CC lib/util/cpuset.o 00:01:54.882 CC lib/util/crc16.o 00:01:54.882 CC lib/ioat/ioat.o 00:01:54.882 CC lib/util/crc32.o 00:01:54.882 CC lib/util/crc32c.o 00:01:54.882 CC lib/util/crc32_ieee.o 00:01:54.882 CC lib/util/crc64.o 00:01:54.882 CC lib/util/dif.o 00:01:54.882 CC lib/util/fd.o 00:01:54.882 CC lib/util/file.o 00:01:54.882 CC lib/util/hexlify.o 00:01:54.882 CC lib/util/iov.o 00:01:54.882 CC lib/util/math.o 00:01:54.882 CC lib/util/pipe.o 00:01:54.882 CC lib/util/strerror_tls.o 00:01:54.882 CC lib/util/string.o 00:01:54.882 CC lib/util/uuid.o 00:01:54.882 CC lib/util/fd_group.o 00:01:54.882 CC lib/util/xor.o 00:01:54.882 CC lib/util/zipf.o 00:01:54.882 CC lib/vfio_user/host/vfio_user_pci.o 00:01:54.882 CC lib/vfio_user/host/vfio_user.o 00:01:54.882 LIB libspdk_dma.a 00:01:54.882 SO libspdk_dma.so.4.0 00:01:55.144 LIB libspdk_ioat.a 00:01:55.144 SYMLINK libspdk_dma.so 00:01:55.144 SO libspdk_ioat.so.7.0 00:01:55.144 SYMLINK libspdk_ioat.so 00:01:55.144 LIB libspdk_vfio_user.a 00:01:55.144 SO libspdk_vfio_user.so.5.0 00:01:55.144 LIB libspdk_util.a 00:01:55.405 SYMLINK libspdk_vfio_user.so 00:01:55.405 SO libspdk_util.so.9.0 00:01:55.405 SYMLINK libspdk_util.so 00:01:55.666 LIB libspdk_trace_parser.a 00:01:55.666 SO libspdk_trace_parser.so.5.0 00:01:55.666 SYMLINK libspdk_trace_parser.so 00:01:55.928 CC lib/idxd/idxd.o 00:01:55.928 CC lib/vmd/vmd.o 00:01:55.928 CC lib/env_dpdk/env.o 00:01:55.928 CC lib/json/json_parse.o 00:01:55.928 CC lib/idxd/idxd_user.o 00:01:55.928 CC lib/env_dpdk/memory.o 00:01:55.928 CC lib/json/json_util.o 00:01:55.928 CC lib/vmd/led.o 00:01:55.928 CC lib/idxd/idxd_kernel.o 00:01:55.928 CC lib/env_dpdk/pci.o 00:01:55.928 CC lib/json/json_write.o 00:01:55.928 CC lib/env_dpdk/threads.o 00:01:55.928 CC lib/env_dpdk/init.o 00:01:55.928 CC lib/conf/conf.o 00:01:55.928 CC lib/env_dpdk/pci_ioat.o 00:01:55.928 CC lib/rdma/common.o 00:01:55.928 CC lib/env_dpdk/pci_virtio.o 00:01:55.928 CC lib/rdma/rdma_verbs.o 00:01:55.928 CC lib/env_dpdk/pci_vmd.o 00:01:55.928 CC lib/env_dpdk/pci_idxd.o 00:01:55.928 CC lib/env_dpdk/pci_event.o 00:01:55.928 CC lib/env_dpdk/pci_dpdk.o 00:01:55.928 CC lib/env_dpdk/sigbus_handler.o 00:01:55.928 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:55.928 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:56.190 LIB libspdk_conf.a 00:01:56.190 SO libspdk_conf.so.6.0 00:01:56.190 LIB libspdk_json.a 00:01:56.190 LIB libspdk_rdma.a 00:01:56.190 SYMLINK libspdk_conf.so 00:01:56.190 SO libspdk_json.so.6.0 00:01:56.190 SO libspdk_rdma.so.6.0 00:01:56.190 SYMLINK libspdk_json.so 00:01:56.190 SYMLINK libspdk_rdma.so 00:01:56.451 LIB libspdk_idxd.a 00:01:56.451 SO libspdk_idxd.so.12.0 00:01:56.451 LIB libspdk_vmd.a 00:01:56.452 SO libspdk_vmd.so.6.0 00:01:56.452 SYMLINK libspdk_idxd.so 00:01:56.452 SYMLINK libspdk_vmd.so 00:01:56.713 CC lib/jsonrpc/jsonrpc_server.o 00:01:56.713 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:56.713 CC lib/jsonrpc/jsonrpc_client.o 00:01:56.713 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:56.974 LIB libspdk_jsonrpc.a 00:01:56.974 SO libspdk_jsonrpc.so.6.0 00:01:56.974 SYMLINK libspdk_jsonrpc.so 00:01:56.974 LIB libspdk_env_dpdk.a 00:01:57.236 SO libspdk_env_dpdk.so.14.0 00:01:57.236 SYMLINK 
libspdk_env_dpdk.so 00:01:57.497 CC lib/rpc/rpc.o 00:01:57.497 LIB libspdk_rpc.a 00:01:57.497 SO libspdk_rpc.so.6.0 00:01:57.759 SYMLINK libspdk_rpc.so 00:01:58.019 CC lib/notify/notify.o 00:01:58.019 CC lib/notify/notify_rpc.o 00:01:58.019 CC lib/keyring/keyring.o 00:01:58.019 CC lib/trace/trace.o 00:01:58.019 CC lib/trace/trace_flags.o 00:01:58.019 CC lib/keyring/keyring_rpc.o 00:01:58.019 CC lib/trace/trace_rpc.o 00:01:58.281 LIB libspdk_notify.a 00:01:58.281 SO libspdk_notify.so.6.0 00:01:58.281 LIB libspdk_keyring.a 00:01:58.281 LIB libspdk_trace.a 00:01:58.281 SO libspdk_keyring.so.1.0 00:01:58.281 SYMLINK libspdk_notify.so 00:01:58.281 SO libspdk_trace.so.10.0 00:01:58.281 SYMLINK libspdk_keyring.so 00:01:58.542 SYMLINK libspdk_trace.so 00:01:58.803 CC lib/thread/thread.o 00:01:58.803 CC lib/thread/iobuf.o 00:01:58.803 CC lib/sock/sock.o 00:01:58.803 CC lib/sock/sock_rpc.o 00:01:59.063 LIB libspdk_sock.a 00:01:59.063 SO libspdk_sock.so.9.0 00:01:59.325 SYMLINK libspdk_sock.so 00:01:59.586 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:59.586 CC lib/nvme/nvme_ctrlr.o 00:01:59.586 CC lib/nvme/nvme_ns_cmd.o 00:01:59.586 CC lib/nvme/nvme_fabric.o 00:01:59.586 CC lib/nvme/nvme_ns.o 00:01:59.586 CC lib/nvme/nvme_pcie_common.o 00:01:59.586 CC lib/nvme/nvme_pcie.o 00:01:59.586 CC lib/nvme/nvme_qpair.o 00:01:59.586 CC lib/nvme/nvme.o 00:01:59.586 CC lib/nvme/nvme_quirks.o 00:01:59.586 CC lib/nvme/nvme_transport.o 00:01:59.586 CC lib/nvme/nvme_discovery.o 00:01:59.586 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:59.586 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:59.586 CC lib/nvme/nvme_tcp.o 00:01:59.586 CC lib/nvme/nvme_opal.o 00:01:59.586 CC lib/nvme/nvme_io_msg.o 00:01:59.586 CC lib/nvme/nvme_poll_group.o 00:01:59.586 CC lib/nvme/nvme_stubs.o 00:01:59.586 CC lib/nvme/nvme_zns.o 00:01:59.586 CC lib/nvme/nvme_auth.o 00:01:59.586 CC lib/nvme/nvme_cuse.o 00:01:59.586 CC lib/nvme/nvme_vfio_user.o 00:01:59.586 CC lib/nvme/nvme_rdma.o 00:02:00.159 LIB libspdk_thread.a 00:02:00.159 SO libspdk_thread.so.10.0 00:02:00.159 SYMLINK libspdk_thread.so 00:02:00.420 CC lib/blob/blobstore.o 00:02:00.420 CC lib/blob/request.o 00:02:00.420 CC lib/blob/zeroes.o 00:02:00.420 CC lib/blob/blob_bs_dev.o 00:02:00.420 CC lib/init/json_config.o 00:02:00.420 CC lib/init/subsystem.o 00:02:00.420 CC lib/init/rpc.o 00:02:00.420 CC lib/init/subsystem_rpc.o 00:02:00.420 CC lib/virtio/virtio_vhost_user.o 00:02:00.420 CC lib/virtio/virtio.o 00:02:00.420 CC lib/virtio/virtio_vfio_user.o 00:02:00.420 CC lib/virtio/virtio_pci.o 00:02:00.420 CC lib/accel/accel.o 00:02:00.420 CC lib/accel/accel_rpc.o 00:02:00.420 CC lib/accel/accel_sw.o 00:02:00.420 CC lib/vfu_tgt/tgt_endpoint.o 00:02:00.420 CC lib/vfu_tgt/tgt_rpc.o 00:02:00.681 LIB libspdk_init.a 00:02:00.681 SO libspdk_init.so.5.0 00:02:00.681 LIB libspdk_vfu_tgt.a 00:02:00.681 LIB libspdk_virtio.a 00:02:00.942 SYMLINK libspdk_init.so 00:02:00.942 SO libspdk_vfu_tgt.so.3.0 00:02:00.942 SO libspdk_virtio.so.7.0 00:02:00.942 SYMLINK libspdk_vfu_tgt.so 00:02:00.942 SYMLINK libspdk_virtio.so 00:02:01.204 CC lib/event/app.o 00:02:01.204 CC lib/event/reactor.o 00:02:01.204 CC lib/event/log_rpc.o 00:02:01.204 CC lib/event/app_rpc.o 00:02:01.204 CC lib/event/scheduler_static.o 00:02:01.466 LIB libspdk_accel.a 00:02:01.466 SO libspdk_accel.so.15.0 00:02:01.466 LIB libspdk_nvme.a 00:02:01.466 SYMLINK libspdk_accel.so 00:02:01.466 SO libspdk_nvme.so.13.0 00:02:01.466 LIB libspdk_event.a 00:02:01.727 SO libspdk_event.so.13.1 00:02:01.727 SYMLINK libspdk_event.so 00:02:01.727 CC lib/bdev/bdev.o 00:02:01.727 CC 
lib/bdev/bdev_rpc.o 00:02:01.727 CC lib/bdev/bdev_zone.o 00:02:01.727 CC lib/bdev/part.o 00:02:01.727 CC lib/bdev/scsi_nvme.o 00:02:01.989 SYMLINK libspdk_nvme.so 00:02:02.931 LIB libspdk_blob.a 00:02:02.931 SO libspdk_blob.so.11.0 00:02:02.931 SYMLINK libspdk_blob.so 00:02:03.504 CC lib/lvol/lvol.o 00:02:03.504 CC lib/blobfs/blobfs.o 00:02:03.504 CC lib/blobfs/tree.o 00:02:04.078 LIB libspdk_bdev.a 00:02:04.078 SO libspdk_bdev.so.15.0 00:02:04.078 LIB libspdk_blobfs.a 00:02:04.078 SYMLINK libspdk_bdev.so 00:02:04.078 SO libspdk_blobfs.so.10.0 00:02:04.078 LIB libspdk_lvol.a 00:02:04.339 SO libspdk_lvol.so.10.0 00:02:04.339 SYMLINK libspdk_blobfs.so 00:02:04.339 SYMLINK libspdk_lvol.so 00:02:04.599 CC lib/ftl/ftl_core.o 00:02:04.599 CC lib/ftl/ftl_init.o 00:02:04.599 CC lib/nvmf/ctrlr.o 00:02:04.599 CC lib/ftl/ftl_layout.o 00:02:04.599 CC lib/nvmf/ctrlr_discovery.o 00:02:04.599 CC lib/ftl/ftl_debug.o 00:02:04.599 CC lib/ftl/ftl_io.o 00:02:04.599 CC lib/nvmf/ctrlr_bdev.o 00:02:04.599 CC lib/scsi/dev.o 00:02:04.599 CC lib/ftl/ftl_sb.o 00:02:04.599 CC lib/nvmf/subsystem.o 00:02:04.599 CC lib/scsi/lun.o 00:02:04.599 CC lib/nvmf/nvmf.o 00:02:04.599 CC lib/ftl/ftl_l2p.o 00:02:04.599 CC lib/nvmf/transport.o 00:02:04.599 CC lib/ftl/ftl_l2p_flat.o 00:02:04.599 CC lib/nvmf/nvmf_rpc.o 00:02:04.599 CC lib/scsi/port.o 00:02:04.599 CC lib/ftl/ftl_nv_cache.o 00:02:04.599 CC lib/scsi/scsi.o 00:02:04.599 CC lib/ftl/ftl_band.o 00:02:04.599 CC lib/scsi/scsi_bdev.o 00:02:04.599 CC lib/nbd/nbd.o 00:02:04.599 CC lib/nvmf/tcp.o 00:02:04.599 CC lib/nvmf/stubs.o 00:02:04.599 CC lib/ftl/ftl_band_ops.o 00:02:04.599 CC lib/nvmf/vfio_user.o 00:02:04.599 CC lib/scsi/scsi_pr.o 00:02:04.599 CC lib/nbd/nbd_rpc.o 00:02:04.599 CC lib/ftl/ftl_writer.o 00:02:04.599 CC lib/scsi/scsi_rpc.o 00:02:04.599 CC lib/nvmf/mdns_server.o 00:02:04.599 CC lib/ublk/ublk.o 00:02:04.599 CC lib/ftl/ftl_rq.o 00:02:04.599 CC lib/nvmf/rdma.o 00:02:04.599 CC lib/ublk/ublk_rpc.o 00:02:04.599 CC lib/scsi/task.o 00:02:04.599 CC lib/ftl/ftl_reloc.o 00:02:04.599 CC lib/nvmf/auth.o 00:02:04.599 CC lib/ftl/ftl_l2p_cache.o 00:02:04.599 CC lib/ftl/ftl_p2l.o 00:02:04.599 CC lib/ftl/mngt/ftl_mngt.o 00:02:04.599 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:04.599 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:04.599 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:04.599 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:04.599 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:04.599 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:04.599 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:04.599 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:04.599 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:04.599 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:04.599 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:04.599 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:04.599 CC lib/ftl/utils/ftl_conf.o 00:02:04.599 CC lib/ftl/utils/ftl_md.o 00:02:04.599 CC lib/ftl/utils/ftl_mempool.o 00:02:04.600 CC lib/ftl/utils/ftl_bitmap.o 00:02:04.600 CC lib/ftl/utils/ftl_property.o 00:02:04.600 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:04.600 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:04.600 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:04.600 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:04.600 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:04.600 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:04.600 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:04.600 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:04.600 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:04.600 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:04.600 CC lib/ftl/base/ftl_base_dev.o 00:02:04.600 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:04.600 CC 
lib/ftl/base/ftl_base_bdev.o 00:02:04.600 CC lib/ftl/ftl_trace.o 00:02:05.168 LIB libspdk_nbd.a 00:02:05.168 SO libspdk_nbd.so.7.0 00:02:05.168 LIB libspdk_scsi.a 00:02:05.168 SYMLINK libspdk_nbd.so 00:02:05.168 SO libspdk_scsi.so.9.0 00:02:05.168 SYMLINK libspdk_scsi.so 00:02:05.168 LIB libspdk_ublk.a 00:02:05.168 SO libspdk_ublk.so.3.0 00:02:05.429 SYMLINK libspdk_ublk.so 00:02:05.429 CC lib/iscsi/conn.o 00:02:05.429 CC lib/iscsi/init_grp.o 00:02:05.429 CC lib/iscsi/iscsi.o 00:02:05.429 CC lib/iscsi/md5.o 00:02:05.429 CC lib/iscsi/param.o 00:02:05.429 CC lib/iscsi/portal_grp.o 00:02:05.429 CC lib/iscsi/tgt_node.o 00:02:05.429 CC lib/iscsi/iscsi_subsystem.o 00:02:05.429 CC lib/iscsi/iscsi_rpc.o 00:02:05.429 CC lib/iscsi/task.o 00:02:05.429 LIB libspdk_ftl.a 00:02:05.429 CC lib/vhost/vhost.o 00:02:05.429 CC lib/vhost/vhost_scsi.o 00:02:05.429 CC lib/vhost/vhost_rpc.o 00:02:05.429 CC lib/vhost/vhost_blk.o 00:02:05.689 CC lib/vhost/rte_vhost_user.o 00:02:05.689 SO libspdk_ftl.so.9.0 00:02:05.950 SYMLINK libspdk_ftl.so 00:02:06.211 LIB libspdk_nvmf.a 00:02:06.472 SO libspdk_nvmf.so.18.1 00:02:06.472 LIB libspdk_vhost.a 00:02:06.472 SO libspdk_vhost.so.8.0 00:02:06.733 SYMLINK libspdk_nvmf.so 00:02:06.733 SYMLINK libspdk_vhost.so 00:02:06.733 LIB libspdk_iscsi.a 00:02:06.733 SO libspdk_iscsi.so.8.0 00:02:06.994 SYMLINK libspdk_iscsi.so 00:02:07.566 CC module/env_dpdk/env_dpdk_rpc.o 00:02:07.566 CC module/vfu_device/vfu_virtio.o 00:02:07.566 CC module/vfu_device/vfu_virtio_blk.o 00:02:07.566 CC module/vfu_device/vfu_virtio_scsi.o 00:02:07.566 CC module/vfu_device/vfu_virtio_rpc.o 00:02:07.566 LIB libspdk_env_dpdk_rpc.a 00:02:07.566 CC module/keyring/linux/keyring.o 00:02:07.566 CC module/keyring/linux/keyring_rpc.o 00:02:07.566 CC module/accel/error/accel_error.o 00:02:07.566 CC module/keyring/file/keyring.o 00:02:07.567 CC module/accel/error/accel_error_rpc.o 00:02:07.567 CC module/keyring/file/keyring_rpc.o 00:02:07.567 CC module/sock/posix/posix.o 00:02:07.567 CC module/accel/ioat/accel_ioat.o 00:02:07.567 CC module/accel/ioat/accel_ioat_rpc.o 00:02:07.567 CC module/blob/bdev/blob_bdev.o 00:02:07.567 CC module/accel/iaa/accel_iaa.o 00:02:07.567 CC module/accel/iaa/accel_iaa_rpc.o 00:02:07.567 CC module/scheduler/gscheduler/gscheduler.o 00:02:07.567 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:07.567 CC module/accel/dsa/accel_dsa.o 00:02:07.567 CC module/accel/dsa/accel_dsa_rpc.o 00:02:07.567 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:07.567 SO libspdk_env_dpdk_rpc.so.6.0 00:02:07.827 SYMLINK libspdk_env_dpdk_rpc.so 00:02:07.827 LIB libspdk_keyring_linux.a 00:02:07.827 LIB libspdk_keyring_file.a 00:02:07.827 LIB libspdk_scheduler_gscheduler.a 00:02:07.827 SO libspdk_keyring_linux.so.1.0 00:02:07.827 SO libspdk_keyring_file.so.1.0 00:02:07.827 LIB libspdk_scheduler_dynamic.a 00:02:07.827 LIB libspdk_accel_error.a 00:02:07.827 LIB libspdk_scheduler_dpdk_governor.a 00:02:07.827 LIB libspdk_accel_ioat.a 00:02:07.827 SO libspdk_scheduler_gscheduler.so.4.0 00:02:07.827 SO libspdk_scheduler_dynamic.so.4.0 00:02:07.827 LIB libspdk_accel_iaa.a 00:02:07.827 SO libspdk_accel_error.so.2.0 00:02:07.827 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:07.827 SYMLINK libspdk_keyring_linux.so 00:02:07.827 SO libspdk_accel_ioat.so.6.0 00:02:07.827 SYMLINK libspdk_keyring_file.so 00:02:07.827 LIB libspdk_accel_dsa.a 00:02:07.827 SYMLINK libspdk_scheduler_gscheduler.so 00:02:07.827 SO libspdk_accel_iaa.so.3.0 00:02:07.827 LIB libspdk_blob_bdev.a 00:02:07.827 SYMLINK 
libspdk_scheduler_dynamic.so 00:02:07.827 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:07.827 SO libspdk_accel_dsa.so.5.0 00:02:07.827 SYMLINK libspdk_accel_error.so 00:02:08.088 SYMLINK libspdk_accel_ioat.so 00:02:08.088 SO libspdk_blob_bdev.so.11.0 00:02:08.088 SYMLINK libspdk_accel_iaa.so 00:02:08.088 SYMLINK libspdk_accel_dsa.so 00:02:08.088 SYMLINK libspdk_blob_bdev.so 00:02:08.088 LIB libspdk_vfu_device.a 00:02:08.088 SO libspdk_vfu_device.so.3.0 00:02:08.088 SYMLINK libspdk_vfu_device.so 00:02:08.349 LIB libspdk_sock_posix.a 00:02:08.349 SO libspdk_sock_posix.so.6.0 00:02:08.349 SYMLINK libspdk_sock_posix.so 00:02:08.610 CC module/bdev/raid/bdev_raid.o 00:02:08.610 CC module/bdev/raid/bdev_raid_rpc.o 00:02:08.610 CC module/bdev/raid/bdev_raid_sb.o 00:02:08.610 CC module/bdev/delay/vbdev_delay.o 00:02:08.610 CC module/bdev/raid/raid0.o 00:02:08.610 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:08.610 CC module/bdev/raid/raid1.o 00:02:08.610 CC module/bdev/raid/concat.o 00:02:08.610 CC module/bdev/nvme/bdev_nvme.o 00:02:08.610 CC module/blobfs/bdev/blobfs_bdev.o 00:02:08.610 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:08.610 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:08.610 CC module/bdev/split/vbdev_split.o 00:02:08.610 CC module/bdev/error/vbdev_error.o 00:02:08.610 CC module/bdev/nvme/nvme_rpc.o 00:02:08.610 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:08.610 CC module/bdev/split/vbdev_split_rpc.o 00:02:08.610 CC module/bdev/nvme/bdev_mdns_client.o 00:02:08.610 CC module/bdev/error/vbdev_error_rpc.o 00:02:08.610 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:08.610 CC module/bdev/nvme/vbdev_opal.o 00:02:08.610 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:08.610 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:08.610 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:08.610 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:08.610 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:08.610 CC module/bdev/ftl/bdev_ftl.o 00:02:08.610 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:08.610 CC module/bdev/null/bdev_null.o 00:02:08.610 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:08.610 CC module/bdev/lvol/vbdev_lvol.o 00:02:08.610 CC module/bdev/passthru/vbdev_passthru.o 00:02:08.610 CC module/bdev/malloc/bdev_malloc.o 00:02:08.610 CC module/bdev/gpt/gpt.o 00:02:08.610 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:08.610 CC module/bdev/null/bdev_null_rpc.o 00:02:08.610 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:08.610 CC module/bdev/gpt/vbdev_gpt.o 00:02:08.610 CC module/bdev/iscsi/bdev_iscsi.o 00:02:08.610 CC module/bdev/aio/bdev_aio.o 00:02:08.610 CC module/bdev/aio/bdev_aio_rpc.o 00:02:08.610 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:08.870 LIB libspdk_blobfs_bdev.a 00:02:08.870 LIB libspdk_bdev_split.a 00:02:08.870 SO libspdk_blobfs_bdev.so.6.0 00:02:08.870 SO libspdk_bdev_split.so.6.0 00:02:08.870 LIB libspdk_bdev_null.a 00:02:08.870 LIB libspdk_bdev_error.a 00:02:08.870 LIB libspdk_bdev_gpt.a 00:02:08.870 SYMLINK libspdk_bdev_split.so 00:02:08.870 LIB libspdk_bdev_passthru.a 00:02:09.131 SYMLINK libspdk_blobfs_bdev.so 00:02:09.131 SO libspdk_bdev_null.so.6.0 00:02:09.131 LIB libspdk_bdev_ftl.a 00:02:09.131 SO libspdk_bdev_error.so.6.0 00:02:09.131 LIB libspdk_bdev_zone_block.a 00:02:09.131 SO libspdk_bdev_ftl.so.6.0 00:02:09.131 SO libspdk_bdev_gpt.so.6.0 00:02:09.131 LIB libspdk_bdev_delay.a 00:02:09.131 SO libspdk_bdev_passthru.so.6.0 00:02:09.131 LIB libspdk_bdev_aio.a 00:02:09.131 SO libspdk_bdev_zone_block.so.6.0 00:02:09.131 LIB libspdk_bdev_malloc.a 00:02:09.131 SYMLINK 
libspdk_bdev_null.so 00:02:09.131 SYMLINK libspdk_bdev_error.so 00:02:09.131 LIB libspdk_bdev_iscsi.a 00:02:09.131 SO libspdk_bdev_aio.so.6.0 00:02:09.131 SO libspdk_bdev_delay.so.6.0 00:02:09.131 SYMLINK libspdk_bdev_gpt.so 00:02:09.131 SYMLINK libspdk_bdev_ftl.so 00:02:09.131 SO libspdk_bdev_malloc.so.6.0 00:02:09.131 SYMLINK libspdk_bdev_passthru.so 00:02:09.131 SO libspdk_bdev_iscsi.so.6.0 00:02:09.131 SYMLINK libspdk_bdev_zone_block.so 00:02:09.131 SYMLINK libspdk_bdev_aio.so 00:02:09.131 SYMLINK libspdk_bdev_delay.so 00:02:09.131 LIB libspdk_bdev_virtio.a 00:02:09.131 LIB libspdk_bdev_lvol.a 00:02:09.131 SYMLINK libspdk_bdev_malloc.so 00:02:09.131 SYMLINK libspdk_bdev_iscsi.so 00:02:09.131 SO libspdk_bdev_virtio.so.6.0 00:02:09.131 SO libspdk_bdev_lvol.so.6.0 00:02:09.392 SYMLINK libspdk_bdev_lvol.so 00:02:09.392 SYMLINK libspdk_bdev_virtio.so 00:02:09.652 LIB libspdk_bdev_raid.a 00:02:09.652 SO libspdk_bdev_raid.so.6.0 00:02:09.652 SYMLINK libspdk_bdev_raid.so 00:02:10.593 LIB libspdk_bdev_nvme.a 00:02:10.593 SO libspdk_bdev_nvme.so.7.0 00:02:10.855 SYMLINK libspdk_bdev_nvme.so 00:02:11.430 CC module/event/subsystems/iobuf/iobuf.o 00:02:11.430 CC module/event/subsystems/sock/sock.o 00:02:11.430 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:11.430 CC module/event/subsystems/vmd/vmd.o 00:02:11.430 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:11.430 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:11.430 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:11.430 CC module/event/subsystems/scheduler/scheduler.o 00:02:11.430 CC module/event/subsystems/keyring/keyring.o 00:02:11.691 LIB libspdk_event_vfu_tgt.a 00:02:11.691 LIB libspdk_event_keyring.a 00:02:11.691 LIB libspdk_event_sock.a 00:02:11.691 LIB libspdk_event_vmd.a 00:02:11.691 LIB libspdk_event_vhost_blk.a 00:02:11.691 LIB libspdk_event_iobuf.a 00:02:11.691 LIB libspdk_event_scheduler.a 00:02:11.691 SO libspdk_event_keyring.so.1.0 00:02:11.691 SO libspdk_event_vfu_tgt.so.3.0 00:02:11.691 SO libspdk_event_sock.so.5.0 00:02:11.691 SO libspdk_event_vmd.so.6.0 00:02:11.691 SO libspdk_event_scheduler.so.4.0 00:02:11.691 SO libspdk_event_vhost_blk.so.3.0 00:02:11.691 SO libspdk_event_iobuf.so.3.0 00:02:11.691 SYMLINK libspdk_event_keyring.so 00:02:11.691 SYMLINK libspdk_event_vfu_tgt.so 00:02:11.691 SYMLINK libspdk_event_sock.so 00:02:11.691 SYMLINK libspdk_event_scheduler.so 00:02:11.691 SYMLINK libspdk_event_vmd.so 00:02:11.691 SYMLINK libspdk_event_vhost_blk.so 00:02:11.692 SYMLINK libspdk_event_iobuf.so 00:02:12.362 CC module/event/subsystems/accel/accel.o 00:02:12.362 LIB libspdk_event_accel.a 00:02:12.362 SO libspdk_event_accel.so.6.0 00:02:12.362 SYMLINK libspdk_event_accel.so 00:02:12.634 CC module/event/subsystems/bdev/bdev.o 00:02:12.894 LIB libspdk_event_bdev.a 00:02:12.894 SO libspdk_event_bdev.so.6.0 00:02:12.894 SYMLINK libspdk_event_bdev.so 00:02:13.467 CC module/event/subsystems/scsi/scsi.o 00:02:13.467 CC module/event/subsystems/nbd/nbd.o 00:02:13.467 CC module/event/subsystems/ublk/ublk.o 00:02:13.467 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:13.467 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:13.467 LIB libspdk_event_nbd.a 00:02:13.467 LIB libspdk_event_ublk.a 00:02:13.467 LIB libspdk_event_scsi.a 00:02:13.467 SO libspdk_event_nbd.so.6.0 00:02:13.467 SO libspdk_event_ublk.so.3.0 00:02:13.467 LIB libspdk_event_nvmf.a 00:02:13.467 SO libspdk_event_scsi.so.6.0 00:02:13.727 SYMLINK libspdk_event_nbd.so 00:02:13.727 SO libspdk_event_nvmf.so.6.0 00:02:13.727 SYMLINK libspdk_event_ublk.so 
00:02:13.727 SYMLINK libspdk_event_scsi.so 00:02:13.727 SYMLINK libspdk_event_nvmf.so 00:02:13.988 CC module/event/subsystems/iscsi/iscsi.o 00:02:13.988 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:14.249 LIB libspdk_event_vhost_scsi.a 00:02:14.249 LIB libspdk_event_iscsi.a 00:02:14.249 SO libspdk_event_vhost_scsi.so.3.0 00:02:14.249 SO libspdk_event_iscsi.so.6.0 00:02:14.249 SYMLINK libspdk_event_vhost_scsi.so 00:02:14.249 SYMLINK libspdk_event_iscsi.so 00:02:14.510 SO libspdk.so.6.0 00:02:14.510 SYMLINK libspdk.so 00:02:14.770 CC app/spdk_top/spdk_top.o 00:02:14.770 CC app/spdk_nvme_identify/identify.o 00:02:14.770 CC app/trace_record/trace_record.o 00:02:14.770 CC app/spdk_lspci/spdk_lspci.o 00:02:14.770 CC app/spdk_nvme_perf/perf.o 00:02:14.770 CXX app/trace/trace.o 00:02:14.770 TEST_HEADER include/spdk/accel.h 00:02:14.770 TEST_HEADER include/spdk/assert.h 00:02:15.035 TEST_HEADER include/spdk/barrier.h 00:02:15.035 TEST_HEADER include/spdk/accel_module.h 00:02:15.035 TEST_HEADER include/spdk/base64.h 00:02:15.035 CC test/rpc_client/rpc_client_test.o 00:02:15.035 TEST_HEADER include/spdk/bdev.h 00:02:15.035 TEST_HEADER include/spdk/bdev_module.h 00:02:15.035 TEST_HEADER include/spdk/bit_array.h 00:02:15.035 TEST_HEADER include/spdk/bdev_zone.h 00:02:15.035 TEST_HEADER include/spdk/bit_pool.h 00:02:15.035 TEST_HEADER include/spdk/blob_bdev.h 00:02:15.035 CC app/spdk_nvme_discover/discovery_aer.o 00:02:15.035 TEST_HEADER include/spdk/blobfs.h 00:02:15.035 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:15.035 CC app/vhost/vhost.o 00:02:15.035 TEST_HEADER include/spdk/config.h 00:02:15.035 TEST_HEADER include/spdk/blob.h 00:02:15.035 TEST_HEADER include/spdk/conf.h 00:02:15.035 TEST_HEADER include/spdk/cpuset.h 00:02:15.035 TEST_HEADER include/spdk/crc16.h 00:02:15.035 CC app/iscsi_tgt/iscsi_tgt.o 00:02:15.035 TEST_HEADER include/spdk/crc32.h 00:02:15.035 TEST_HEADER include/spdk/dif.h 00:02:15.035 TEST_HEADER include/spdk/crc64.h 00:02:15.035 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:15.035 TEST_HEADER include/spdk/dma.h 00:02:15.035 TEST_HEADER include/spdk/endian.h 00:02:15.035 TEST_HEADER include/spdk/env_dpdk.h 00:02:15.035 TEST_HEADER include/spdk/event.h 00:02:15.035 CC app/spdk_dd/spdk_dd.o 00:02:15.035 TEST_HEADER include/spdk/fd_group.h 00:02:15.035 TEST_HEADER include/spdk/env.h 00:02:15.035 TEST_HEADER include/spdk/file.h 00:02:15.035 TEST_HEADER include/spdk/ftl.h 00:02:15.035 TEST_HEADER include/spdk/fd.h 00:02:15.035 TEST_HEADER include/spdk/gpt_spec.h 00:02:15.035 TEST_HEADER include/spdk/hexlify.h 00:02:15.035 TEST_HEADER include/spdk/histogram_data.h 00:02:15.035 TEST_HEADER include/spdk/idxd.h 00:02:15.035 TEST_HEADER include/spdk/idxd_spec.h 00:02:15.035 TEST_HEADER include/spdk/init.h 00:02:15.035 TEST_HEADER include/spdk/ioat_spec.h 00:02:15.035 TEST_HEADER include/spdk/json.h 00:02:15.035 TEST_HEADER include/spdk/ioat.h 00:02:15.035 TEST_HEADER include/spdk/jsonrpc.h 00:02:15.035 TEST_HEADER include/spdk/iscsi_spec.h 00:02:15.035 TEST_HEADER include/spdk/keyring_module.h 00:02:15.035 TEST_HEADER include/spdk/keyring.h 00:02:15.035 TEST_HEADER include/spdk/likely.h 00:02:15.035 CC app/spdk_tgt/spdk_tgt.o 00:02:15.035 TEST_HEADER include/spdk/log.h 00:02:15.035 CC app/nvmf_tgt/nvmf_main.o 00:02:15.035 TEST_HEADER include/spdk/lvol.h 00:02:15.035 TEST_HEADER include/spdk/mmio.h 00:02:15.035 TEST_HEADER include/spdk/memory.h 00:02:15.035 TEST_HEADER include/spdk/notify.h 00:02:15.035 TEST_HEADER include/spdk/nbd.h 00:02:15.035 TEST_HEADER 
include/spdk/nvme.h 00:02:15.035 TEST_HEADER include/spdk/nvme_intel.h 00:02:15.035 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:15.036 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:15.036 TEST_HEADER include/spdk/nvme_spec.h 00:02:15.036 TEST_HEADER include/spdk/nvme_zns.h 00:02:15.036 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:15.036 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:15.036 TEST_HEADER include/spdk/nvmf.h 00:02:15.036 TEST_HEADER include/spdk/nvmf_spec.h 00:02:15.036 TEST_HEADER include/spdk/nvmf_transport.h 00:02:15.036 TEST_HEADER include/spdk/pipe.h 00:02:15.036 TEST_HEADER include/spdk/opal_spec.h 00:02:15.036 TEST_HEADER include/spdk/pci_ids.h 00:02:15.036 TEST_HEADER include/spdk/opal.h 00:02:15.036 TEST_HEADER include/spdk/queue.h 00:02:15.036 TEST_HEADER include/spdk/reduce.h 00:02:15.036 TEST_HEADER include/spdk/rpc.h 00:02:15.036 TEST_HEADER include/spdk/scheduler.h 00:02:15.036 TEST_HEADER include/spdk/scsi.h 00:02:15.036 TEST_HEADER include/spdk/scsi_spec.h 00:02:15.036 TEST_HEADER include/spdk/sock.h 00:02:15.036 TEST_HEADER include/spdk/string.h 00:02:15.036 TEST_HEADER include/spdk/stdinc.h 00:02:15.036 TEST_HEADER include/spdk/trace.h 00:02:15.036 TEST_HEADER include/spdk/thread.h 00:02:15.036 TEST_HEADER include/spdk/trace_parser.h 00:02:15.036 TEST_HEADER include/spdk/ublk.h 00:02:15.036 TEST_HEADER include/spdk/tree.h 00:02:15.036 TEST_HEADER include/spdk/util.h 00:02:15.036 TEST_HEADER include/spdk/uuid.h 00:02:15.036 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:15.036 TEST_HEADER include/spdk/version.h 00:02:15.036 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:15.036 TEST_HEADER include/spdk/vhost.h 00:02:15.036 TEST_HEADER include/spdk/vmd.h 00:02:15.036 TEST_HEADER include/spdk/xor.h 00:02:15.036 TEST_HEADER include/spdk/zipf.h 00:02:15.036 CXX test/cpp_headers/accel_module.o 00:02:15.036 CXX test/cpp_headers/accel.o 00:02:15.036 CXX test/cpp_headers/assert.o 00:02:15.036 CXX test/cpp_headers/barrier.o 00:02:15.036 CXX test/cpp_headers/base64.o 00:02:15.036 CXX test/cpp_headers/bdev.o 00:02:15.036 CXX test/cpp_headers/bdev_zone.o 00:02:15.036 CXX test/cpp_headers/bit_array.o 00:02:15.036 CXX test/cpp_headers/bdev_module.o 00:02:15.036 CXX test/cpp_headers/bit_pool.o 00:02:15.036 CXX test/cpp_headers/blob_bdev.o 00:02:15.036 CXX test/cpp_headers/blobfs_bdev.o 00:02:15.036 CXX test/cpp_headers/blobfs.o 00:02:15.036 CXX test/cpp_headers/blob.o 00:02:15.036 CXX test/cpp_headers/cpuset.o 00:02:15.036 CXX test/cpp_headers/conf.o 00:02:15.036 CXX test/cpp_headers/crc16.o 00:02:15.036 CXX test/cpp_headers/config.o 00:02:15.036 CXX test/cpp_headers/dif.o 00:02:15.036 CXX test/cpp_headers/crc32.o 00:02:15.036 CXX test/cpp_headers/crc64.o 00:02:15.036 CXX test/cpp_headers/env.o 00:02:15.036 CXX test/cpp_headers/endian.o 00:02:15.036 CXX test/cpp_headers/dma.o 00:02:15.036 CXX test/cpp_headers/event.o 00:02:15.036 CXX test/cpp_headers/env_dpdk.o 00:02:15.036 CXX test/cpp_headers/fd_group.o 00:02:15.036 CXX test/cpp_headers/file.o 00:02:15.036 CXX test/cpp_headers/fd.o 00:02:15.036 CXX test/cpp_headers/gpt_spec.o 00:02:15.036 CXX test/cpp_headers/ftl.o 00:02:15.036 CXX test/cpp_headers/hexlify.o 00:02:15.036 CXX test/cpp_headers/histogram_data.o 00:02:15.036 CXX test/cpp_headers/init.o 00:02:15.036 CXX test/cpp_headers/idxd.o 00:02:15.036 CXX test/cpp_headers/ioat.o 00:02:15.036 CXX test/cpp_headers/idxd_spec.o 00:02:15.036 CXX test/cpp_headers/ioat_spec.o 00:02:15.036 CXX test/cpp_headers/json.o 00:02:15.036 CXX test/cpp_headers/iscsi_spec.o 00:02:15.036 CXX 
test/cpp_headers/jsonrpc.o 00:02:15.036 CXX test/cpp_headers/keyring.o 00:02:15.036 CXX test/cpp_headers/keyring_module.o 00:02:15.036 CXX test/cpp_headers/log.o 00:02:15.036 CXX test/cpp_headers/likely.o 00:02:15.036 CXX test/cpp_headers/lvol.o 00:02:15.036 CXX test/cpp_headers/memory.o 00:02:15.036 CXX test/cpp_headers/mmio.o 00:02:15.036 CXX test/cpp_headers/nbd.o 00:02:15.036 CXX test/cpp_headers/notify.o 00:02:15.036 CXX test/cpp_headers/nvme_intel.o 00:02:15.036 CXX test/cpp_headers/nvme.o 00:02:15.036 CXX test/cpp_headers/nvme_ocssd.o 00:02:15.036 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:15.036 CXX test/cpp_headers/nvme_spec.o 00:02:15.036 CXX test/cpp_headers/nvme_zns.o 00:02:15.036 CXX test/cpp_headers/nvmf_cmd.o 00:02:15.036 CXX test/cpp_headers/nvmf_spec.o 00:02:15.036 CXX test/cpp_headers/nvmf.o 00:02:15.036 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:15.036 CXX test/cpp_headers/nvmf_transport.o 00:02:15.036 CXX test/cpp_headers/opal_spec.o 00:02:15.036 CXX test/cpp_headers/opal.o 00:02:15.036 CXX test/cpp_headers/pipe.o 00:02:15.036 CXX test/cpp_headers/pci_ids.o 00:02:15.036 CXX test/cpp_headers/queue.o 00:02:15.036 CXX test/cpp_headers/reduce.o 00:02:15.036 CXX test/cpp_headers/rpc.o 00:02:15.036 CXX test/cpp_headers/scheduler.o 00:02:15.036 CXX test/cpp_headers/scsi.o 00:02:15.036 CC examples/accel/perf/accel_perf.o 00:02:15.301 CC examples/ioat/perf/perf.o 00:02:15.301 CC test/nvme/e2edp/nvme_dp.o 00:02:15.301 CC test/nvme/aer/aer.o 00:02:15.301 CC test/nvme/reset/reset.o 00:02:15.301 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:15.301 CC examples/idxd/perf/perf.o 00:02:15.301 CC test/bdev/bdevio/bdevio.o 00:02:15.301 CC examples/vmd/lsvmd/lsvmd.o 00:02:15.301 CC examples/ioat/verify/verify.o 00:02:15.301 CC test/accel/dif/dif.o 00:02:15.301 CC test/event/event_perf/event_perf.o 00:02:15.301 CC test/nvme/startup/startup.o 00:02:15.301 CC test/nvme/sgl/sgl.o 00:02:15.301 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:15.301 CXX test/cpp_headers/scsi_spec.o 00:02:15.301 CC examples/nvme/arbitration/arbitration.o 00:02:15.301 CC examples/nvme/reconnect/reconnect.o 00:02:15.301 CC app/fio/nvme/fio_plugin.o 00:02:15.301 CC test/nvme/compliance/nvme_compliance.o 00:02:15.301 CC examples/sock/hello_world/hello_sock.o 00:02:15.301 CC test/thread/poller_perf/poller_perf.o 00:02:15.301 CC examples/nvme/hotplug/hotplug.o 00:02:15.301 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:15.301 CC examples/nvme/abort/abort.o 00:02:15.301 CC examples/vmd/led/led.o 00:02:15.301 CC examples/util/zipf/zipf.o 00:02:15.301 CC test/nvme/connect_stress/connect_stress.o 00:02:15.301 CC test/event/reactor_perf/reactor_perf.o 00:02:15.301 CC test/dma/test_dma/test_dma.o 00:02:15.301 CC examples/blob/hello_world/hello_blob.o 00:02:15.301 CC test/nvme/cuse/cuse.o 00:02:15.301 CC test/nvme/fused_ordering/fused_ordering.o 00:02:15.301 CC examples/nvme/hello_world/hello_world.o 00:02:15.301 CC test/nvme/simple_copy/simple_copy.o 00:02:15.301 CC test/app/stub/stub.o 00:02:15.301 CC test/env/memory/memory_ut.o 00:02:15.301 CC test/nvme/boot_partition/boot_partition.o 00:02:15.301 CC examples/bdev/hello_world/hello_bdev.o 00:02:15.301 CC test/nvme/overhead/overhead.o 00:02:15.301 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:15.301 CC test/nvme/err_injection/err_injection.o 00:02:15.301 CC test/env/pci/pci_ut.o 00:02:15.301 CC test/event/reactor/reactor.o 00:02:15.301 CC test/env/vtophys/vtophys.o 00:02:15.301 CC test/app/jsoncat/jsoncat.o 00:02:15.301 CC test/nvme/reserve/reserve.o 
00:02:15.301 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:15.301 CC test/app/histogram_perf/histogram_perf.o 00:02:15.301 CC test/nvme/fdp/fdp.o 00:02:15.301 CC test/event/app_repeat/app_repeat.o 00:02:15.301 CC examples/blob/cli/blobcli.o 00:02:15.301 CC app/fio/bdev/fio_plugin.o 00:02:15.301 CC test/event/scheduler/scheduler.o 00:02:15.301 CC examples/nvmf/nvmf/nvmf.o 00:02:15.301 LINK spdk_lspci 00:02:15.301 CC test/blobfs/mkfs/mkfs.o 00:02:15.301 CC examples/thread/thread/thread_ex.o 00:02:15.301 CC examples/bdev/bdevperf/bdevperf.o 00:02:15.301 CC test/app/bdev_svc/bdev_svc.o 00:02:15.568 LINK spdk_trace_record 00:02:15.568 LINK rpc_client_test 00:02:15.568 LINK interrupt_tgt 00:02:15.568 LINK spdk_nvme_discover 00:02:15.568 LINK vhost 00:02:15.568 LINK nvmf_tgt 00:02:15.831 LINK iscsi_tgt 00:02:15.831 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:15.831 CC test/lvol/esnap/esnap.o 00:02:15.831 CC test/env/mem_callbacks/mem_callbacks.o 00:02:15.831 LINK spdk_tgt 00:02:15.831 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:15.831 LINK event_perf 00:02:15.831 LINK lsvmd 00:02:15.831 LINK poller_perf 00:02:16.093 LINK reactor 00:02:16.093 LINK zipf 00:02:16.093 LINK vtophys 00:02:16.093 LINK reactor_perf 00:02:16.093 LINK pmr_persistence 00:02:16.093 LINK ioat_perf 00:02:16.093 CXX test/cpp_headers/sock.o 00:02:16.093 CXX test/cpp_headers/stdinc.o 00:02:16.093 LINK env_dpdk_post_init 00:02:16.093 LINK cmb_copy 00:02:16.093 LINK startup 00:02:16.093 CXX test/cpp_headers/string.o 00:02:16.093 LINK histogram_perf 00:02:16.093 CXX test/cpp_headers/thread.o 00:02:16.093 LINK led 00:02:16.093 CXX test/cpp_headers/trace.o 00:02:16.093 LINK doorbell_aers 00:02:16.093 CXX test/cpp_headers/trace_parser.o 00:02:16.093 CXX test/cpp_headers/tree.o 00:02:16.093 LINK connect_stress 00:02:16.093 LINK app_repeat 00:02:16.093 CXX test/cpp_headers/ublk.o 00:02:16.093 LINK fused_ordering 00:02:16.093 LINK stub 00:02:16.093 LINK reset 00:02:16.093 LINK verify 00:02:16.093 LINK boot_partition 00:02:16.093 LINK jsoncat 00:02:16.093 CXX test/cpp_headers/uuid.o 00:02:16.093 CXX test/cpp_headers/util.o 00:02:16.093 CXX test/cpp_headers/version.o 00:02:16.093 CXX test/cpp_headers/vfio_user_pci.o 00:02:16.093 CXX test/cpp_headers/vfio_user_spec.o 00:02:16.093 CXX test/cpp_headers/vhost.o 00:02:16.093 LINK reserve 00:02:16.093 CXX test/cpp_headers/vmd.o 00:02:16.093 CXX test/cpp_headers/xor.o 00:02:16.093 CXX test/cpp_headers/zipf.o 00:02:16.093 LINK hello_world 00:02:16.093 LINK aer 00:02:16.093 LINK hello_blob 00:02:16.093 LINK bdev_svc 00:02:16.093 LINK spdk_dd 00:02:16.093 LINK hotplug 00:02:16.093 LINK simple_copy 00:02:16.093 LINK hello_sock 00:02:16.093 LINK mkfs 00:02:16.093 LINK err_injection 00:02:16.093 LINK sgl 00:02:16.093 LINK nvme_dp 00:02:16.093 LINK hello_bdev 00:02:16.093 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:16.093 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:16.093 LINK spdk_trace 00:02:16.093 LINK scheduler 00:02:16.093 LINK overhead 00:02:16.093 LINK arbitration 00:02:16.093 LINK thread 00:02:16.093 LINK pci_ut 00:02:16.093 LINK nvme_compliance 00:02:16.093 LINK idxd_perf 00:02:16.352 LINK bdevio 00:02:16.352 LINK fdp 00:02:16.352 LINK abort 00:02:16.352 LINK nvmf 00:02:16.352 LINK reconnect 00:02:16.352 LINK accel_perf 00:02:16.353 LINK test_dma 00:02:16.353 LINK nvme_manage 00:02:16.353 LINK dif 00:02:16.353 LINK spdk_nvme 00:02:16.353 LINK blobcli 00:02:16.614 LINK nvme_fuzz 00:02:16.614 LINK spdk_nvme_perf 00:02:16.614 LINK spdk_bdev 00:02:16.614 LINK spdk_top 00:02:16.614 
LINK vhost_fuzz 00:02:16.614 LINK mem_callbacks 00:02:16.614 LINK spdk_nvme_identify 00:02:16.875 LINK bdevperf 00:02:16.875 LINK memory_ut 00:02:17.137 LINK cuse 00:02:17.398 LINK iscsi_fuzz 00:02:19.944 LINK esnap 00:02:20.517 00:02:20.517 real 0m50.537s 00:02:20.517 user 6m44.897s 00:02:20.517 sys 5m15.017s 00:02:20.517 10:27:44 make -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:02:20.517 10:27:44 make -- common/autotest_common.sh@10 -- $ set +x 00:02:20.517 ************************************ 00:02:20.517 END TEST make 00:02:20.517 ************************************ 00:02:20.517 10:27:44 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:20.517 10:27:44 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:20.517 10:27:44 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:20.517 10:27:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.517 10:27:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:20.518 10:27:44 -- pm/common@44 -- $ pid=491895 00:02:20.518 10:27:44 -- pm/common@50 -- $ kill -TERM 491895 00:02:20.518 10:27:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.518 10:27:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:20.518 10:27:44 -- pm/common@44 -- $ pid=491896 00:02:20.518 10:27:44 -- pm/common@50 -- $ kill -TERM 491896 00:02:20.518 10:27:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.518 10:27:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:20.518 10:27:44 -- pm/common@44 -- $ pid=491898 00:02:20.518 10:27:44 -- pm/common@50 -- $ kill -TERM 491898 00:02:20.518 10:27:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.518 10:27:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:20.518 10:27:44 -- pm/common@44 -- $ pid=491924 00:02:20.518 10:27:44 -- pm/common@50 -- $ sudo -E kill -TERM 491924 00:02:20.518 10:27:44 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:20.518 10:27:44 -- nvmf/common.sh@7 -- # uname -s 00:02:20.518 10:27:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:20.518 10:27:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:20.518 10:27:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:20.518 10:27:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:20.518 10:27:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:20.518 10:27:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:20.518 10:27:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:20.518 10:27:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:20.518 10:27:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:20.518 10:27:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:20.518 10:27:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:20.518 10:27:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:20.518 10:27:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:20.518 10:27:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:20.518 10:27:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:20.518 
10:27:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:20.518 10:27:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:20.518 10:27:44 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:20.518 10:27:44 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:20.518 10:27:44 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:20.518 10:27:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.518 10:27:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.518 10:27:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.518 10:27:44 -- paths/export.sh@5 -- # export PATH 00:02:20.518 10:27:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.518 10:27:44 -- nvmf/common.sh@47 -- # : 0 00:02:20.518 10:27:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:20.518 10:27:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:20.518 10:27:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:20.518 10:27:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:20.518 10:27:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:20.518 10:27:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:20.518 10:27:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:20.518 10:27:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:20.518 10:27:44 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:20.518 10:27:44 -- spdk/autotest.sh@32 -- # uname -s 00:02:20.518 10:27:44 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:20.518 10:27:44 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:20.518 10:27:44 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:20.518 10:27:44 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:20.518 10:27:44 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:20.518 10:27:44 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:20.518 10:27:44 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:20.518 10:27:44 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:20.518 10:27:44 -- spdk/autotest.sh@48 -- # udevadm_pid=555188 00:02:20.518 10:27:44 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:20.518 10:27:44 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor 
--property 00:02:20.518 10:27:44 -- pm/common@17 -- # local monitor 00:02:20.518 10:27:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.518 10:27:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.518 10:27:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.518 10:27:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.518 10:27:44 -- pm/common@21 -- # date +%s 00:02:20.518 10:27:44 -- pm/common@21 -- # date +%s 00:02:20.518 10:27:44 -- pm/common@25 -- # sleep 1 00:02:20.518 10:27:44 -- pm/common@21 -- # date +%s 00:02:20.518 10:27:44 -- pm/common@21 -- # date +%s 00:02:20.518 10:27:44 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718008064 00:02:20.518 10:27:44 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718008064 00:02:20.779 10:27:44 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718008064 00:02:20.779 10:27:44 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718008064 00:02:20.779 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718008064_collect-vmstat.pm.log 00:02:20.779 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718008064_collect-cpu-load.pm.log 00:02:20.779 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718008064_collect-cpu-temp.pm.log 00:02:20.779 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718008064_collect-bmc-pm.bmc.pm.log 00:02:21.722 10:27:45 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:21.722 10:27:45 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:21.722 10:27:45 -- common/autotest_common.sh@723 -- # xtrace_disable 00:02:21.722 10:27:45 -- common/autotest_common.sh@10 -- # set +x 00:02:21.722 10:27:45 -- spdk/autotest.sh@59 -- # create_test_list 00:02:21.722 10:27:45 -- common/autotest_common.sh@747 -- # xtrace_disable 00:02:21.722 10:27:45 -- common/autotest_common.sh@10 -- # set +x 00:02:21.722 10:27:45 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:21.722 10:27:45 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:21.722 10:27:45 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:21.722 10:27:45 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:21.722 10:27:45 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:21.722 10:27:45 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:21.722 10:27:45 -- common/autotest_common.sh@1454 -- # uname 00:02:21.722 10:27:45 -- common/autotest_common.sh@1454 -- # '[' Linux = FreeBSD ']' 00:02:21.722 10:27:45 -- spdk/autotest.sh@66 -- # 
freebsd_set_maxsock_buf 00:02:21.722 10:27:45 -- common/autotest_common.sh@1474 -- # uname 00:02:21.722 10:27:45 -- common/autotest_common.sh@1474 -- # [[ Linux = FreeBSD ]] 00:02:21.722 10:27:45 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:21.722 10:27:45 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:21.722 10:27:45 -- spdk/autotest.sh@72 -- # hash lcov 00:02:21.723 10:27:45 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:21.723 10:27:45 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:21.723 --rc lcov_branch_coverage=1 00:02:21.723 --rc lcov_function_coverage=1 00:02:21.723 --rc genhtml_branch_coverage=1 00:02:21.723 --rc genhtml_function_coverage=1 00:02:21.723 --rc genhtml_legend=1 00:02:21.723 --rc geninfo_all_blocks=1 00:02:21.723 ' 00:02:21.723 10:27:45 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:21.723 --rc lcov_branch_coverage=1 00:02:21.723 --rc lcov_function_coverage=1 00:02:21.723 --rc genhtml_branch_coverage=1 00:02:21.723 --rc genhtml_function_coverage=1 00:02:21.723 --rc genhtml_legend=1 00:02:21.723 --rc geninfo_all_blocks=1 00:02:21.723 ' 00:02:21.723 10:27:45 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:21.723 --rc lcov_branch_coverage=1 00:02:21.723 --rc lcov_function_coverage=1 00:02:21.723 --rc genhtml_branch_coverage=1 00:02:21.723 --rc genhtml_function_coverage=1 00:02:21.723 --rc genhtml_legend=1 00:02:21.723 --rc geninfo_all_blocks=1 00:02:21.723 --no-external' 00:02:21.723 10:27:45 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:21.723 --rc lcov_branch_coverage=1 00:02:21.723 --rc lcov_function_coverage=1 00:02:21.723 --rc genhtml_branch_coverage=1 00:02:21.723 --rc genhtml_function_coverage=1 00:02:21.723 --rc genhtml_legend=1 00:02:21.723 --rc geninfo_all_blocks=1 00:02:21.723 --no-external' 00:02:21.723 10:27:45 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:21.723 lcov: LCOV version 1.14 00:02:21.723 10:27:45 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:33.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:33.953 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:48.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:48.853 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:48.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:48.853 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:48.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:48.853 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:48.853 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:48.853 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:48.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:48.853 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:48.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:48.853 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:48.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:48.853 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:48.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:48.853 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:48.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:48.853 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:48.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:48.853 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:48.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:48.853 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:48.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:48.853 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:48.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:48.853 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:48.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:48.853 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:48.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:48.853 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:48.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:48.853 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:48.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:48.853 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:48.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:48.853 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:48.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:48.853 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:48.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:48.853 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:48.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:48.853 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:48.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:48.853 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:48.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:48.853 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:48.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:48.853 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:48.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:48.853 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:48.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:48.853 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:48.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:48.853 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:48.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:48.853 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:48.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:48.853 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:48.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:48.853 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:48.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:48.853 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:48.854 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:48.854 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:48.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:48.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:48.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:48.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:48.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:48.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:48.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:48.855 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:48.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:48.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:48.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:48.855 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:50.765 10:28:14 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:50.765 10:28:14 -- common/autotest_common.sh@723 -- # xtrace_disable 00:02:50.765 10:28:14 -- common/autotest_common.sh@10 -- # set +x 00:02:50.765 10:28:14 -- spdk/autotest.sh@91 -- # rm -f 00:02:50.765 10:28:14 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:54.067 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:02:54.067 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:02:54.067 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:02:54.067 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:02:54.067 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:02:54.067 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:02:54.067 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:02:54.067 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:02:54.067 0000:65:00.0 (144d a80a): Already using the nvme driver 00:02:54.067 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:02:54.067 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:02:54.067 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:02:54.067 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:02:54.067 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:02:54.067 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:02:54.067 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:02:54.328 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:02:54.328 10:28:18 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:54.328 10:28:18 -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:02:54.328 10:28:18 -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:02:54.328 10:28:18 -- common/autotest_common.sh@1669 -- # local nvme bdf 00:02:54.328 10:28:18 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:02:54.328 10:28:18 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:02:54.328 10:28:18 -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:02:54.328 10:28:18 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:54.328 10:28:18 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:02:54.328 10:28:18 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:54.328 10:28:18 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:54.328 10:28:18 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:54.328 10:28:18 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:54.328 10:28:18 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:54.328 10:28:18 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:54.328 No valid GPT data, bailing 00:02:54.328 10:28:18 -- scripts/common.sh@391 -- # 
blkid -s PTTYPE -o value /dev/nvme0n1 00:02:54.328 10:28:18 -- scripts/common.sh@391 -- # pt= 00:02:54.328 10:28:18 -- scripts/common.sh@392 -- # return 1 00:02:54.328 10:28:18 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:54.328 1+0 records in 00:02:54.328 1+0 records out 00:02:54.328 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00379085 s, 277 MB/s 00:02:54.328 10:28:18 -- spdk/autotest.sh@118 -- # sync 00:02:54.328 10:28:18 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:54.328 10:28:18 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:54.328 10:28:18 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:02.469 10:28:26 -- spdk/autotest.sh@124 -- # uname -s 00:03:02.469 10:28:26 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:02.469 10:28:26 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:02.469 10:28:26 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:02.469 10:28:26 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:02.469 10:28:26 -- common/autotest_common.sh@10 -- # set +x 00:03:02.469 ************************************ 00:03:02.469 START TEST setup.sh 00:03:02.469 ************************************ 00:03:02.469 10:28:26 setup.sh -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:02.469 * Looking for test storage... 00:03:02.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:02.469 10:28:26 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:02.469 10:28:26 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:02.469 10:28:26 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:02.469 10:28:26 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:02.469 10:28:26 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:02.469 10:28:26 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:02.469 ************************************ 00:03:02.469 START TEST acl 00:03:02.469 ************************************ 00:03:02.469 10:28:26 setup.sh.acl -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:02.469 * Looking for test storage... 
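The pre-cleanup sequence traced above skips zoned namespaces, concludes from spdk-gpt.py and the empty blkid PTTYPE output that /dev/nvme0n1 carries no partition table, and then zeroes its first MiB before syncing. The sketch below reproduces that decision in a self-contained form; it folds the zoned check into the per-device loop, drops spdk-gpt.py in favour of the blkid test alone, and the wipe_if_unused name is invented for illustration rather than taken from the scripts.

#!/usr/bin/env bash
# Sketch only: destructive, it zeroes the start of unused NVMe namespaces,
# mirroring what the CI run above does on its test machine.
set -euo pipefail
shopt -s extglob   # needed for the /dev/nvme*n!(*p*) glob used in the trace

wipe_if_unused() {
    local dev=$1 name pt
    name=$(basename "$dev")
    # Zoned namespaces are excluded, matching the queue/zoned check in the trace.
    if [[ -e /sys/block/$name/queue/zoned ]] &&
       [[ $(cat "/sys/block/$name/queue/zoned") != none ]]; then
        return 0
    fi
    # blkid prints nothing when no partition-table signature exists,
    # which is the empty "pt=" seen above.
    pt=$(blkid -s PTTYPE -o value "$dev" || true)
    if [[ -z $pt ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi
}

for dev in /dev/nvme*n!(*p*); do
    [[ -b $dev ]] || continue
    wipe_if_unused "$dev"
done
sync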
00:03:02.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:02.469 10:28:26 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:02.469 10:28:26 setup.sh.acl -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:03:02.469 10:28:26 setup.sh.acl -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:03:02.469 10:28:26 setup.sh.acl -- common/autotest_common.sh@1669 -- # local nvme bdf 00:03:02.469 10:28:26 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:03:02.469 10:28:26 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:03:02.469 10:28:26 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:03:02.469 10:28:26 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:02.469 10:28:26 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:03:02.469 10:28:26 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:02.469 10:28:26 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:02.469 10:28:26 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:02.469 10:28:26 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:02.469 10:28:26 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:02.469 10:28:26 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:02.469 10:28:26 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:06.675 10:28:30 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:06.675 10:28:30 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:06.675 10:28:30 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:06.675 10:28:30 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:06.675 10:28:30 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:06.675 10:28:30 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:09.976 Hugepages 00:03:09.976 node hugesize free / total 00:03:09.976 10:28:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.977 00:03:09.977 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:09.977 10:28:33 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:09.977 10:28:33 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:09.977 10:28:33 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:09.977 10:28:33 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:09.977 ************************************ 00:03:09.977 START TEST denied 00:03:09.977 ************************************ 00:03:09.977 10:28:33 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # denied 00:03:09.977 10:28:33 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:09.977 10:28:33 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:09.977 10:28:33 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:09.977 10:28:33 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.977 10:28:33 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:13.404 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:13.404 10:28:37 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:13.404 10:28:37 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:13.404 10:28:37 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:13.404 10:28:37 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:13.404 10:28:37 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:13.404 10:28:37 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:13.404 10:28:37 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:13.404 10:28:37 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:13.404 10:28:37 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:13.404 10:28:37 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:18.691 00:03:18.691 real 0m8.094s 00:03:18.691 user 0m2.732s 00:03:18.691 sys 0m4.693s 00:03:18.691 10:28:42 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:18.691 10:28:42 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:18.691 ************************************ 00:03:18.691 END TEST denied 00:03:18.691 ************************************ 00:03:18.691 10:28:42 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:18.691 10:28:42 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:18.691 10:28:42 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:18.691 10:28:42 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:18.691 ************************************ 00:03:18.691 START TEST allowed 00:03:18.691 ************************************ 00:03:18.691 10:28:42 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # allowed 00:03:18.691 10:28:42 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:18.691 10:28:42 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:18.691 10:28:42 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:18.691 10:28:42 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.691 10:28:42 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:23.980 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:23.980 10:28:47 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:23.980 10:28:47 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:23.980 10:28:47 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:23.980 10:28:47 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:23.980 10:28:47 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:27.283 00:03:27.283 real 0m9.153s 00:03:27.283 user 0m2.664s 00:03:27.283 sys 0m4.809s 00:03:27.283 10:28:51 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:27.283 10:28:51 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:27.283 ************************************ 00:03:27.283 END TEST allowed 00:03:27.283 ************************************ 00:03:27.283 00:03:27.283 real 0m24.846s 00:03:27.283 user 0m8.246s 00:03:27.283 sys 0m14.459s 00:03:27.283 10:28:51 setup.sh.acl -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:27.283 10:28:51 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:27.283 ************************************ 00:03:27.283 END TEST acl 00:03:27.283 ************************************ 00:03:27.283 10:28:51 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:27.283 10:28:51 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:27.283 10:28:51 setup.sh -- 
common/autotest_common.sh@1106 -- # xtrace_disable 00:03:27.283 10:28:51 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:27.283 ************************************ 00:03:27.283 START TEST hugepages 00:03:27.283 ************************************ 00:03:27.283 10:28:51 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:27.283 * Looking for test storage... 00:03:27.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:27.283 10:28:51 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:27.283 10:28:51 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:27.283 10:28:51 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:27.283 10:28:51 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:27.284 10:28:51 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:27.284 10:28:51 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:27.284 10:28:51 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:27.284 10:28:51 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:27.284 10:28:51 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:27.284 10:28:51 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:27.284 10:28:51 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.284 10:28:51 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.284 10:28:51 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.284 10:28:51 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.284 10:28:51 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.284 10:28:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:27.284 10:28:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:27.284 10:28:51 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 107154236 kB' 'MemAvailable: 110497368 kB' 'Buffers: 4132 kB' 'Cached: 10253740 kB' 'SwapCached: 0 kB' 'Active: 7347276 kB' 'Inactive: 3525960 kB' 'Active(anon): 6856680 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 618728 kB' 'Mapped: 170664 kB' 'Shmem: 6241316 kB' 'KReclaimable: 301812 kB' 'Slab: 1145080 kB' 'SReclaimable: 301812 kB' 'SUnreclaim: 843268 kB' 'KernelStack: 27376 kB' 'PageTables: 8876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460884 kB' 'Committed_AS: 8452988 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235660 kB' 'VmallocChunk: 0 kB' 'Percpu: 119232 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4013428 kB' 'DirectMap2M: 43900928 kB' 'DirectMap1G: 88080384 kB' 00:03:27.284 10:28:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:03:27.284 10:28:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue [xtrace condensed: setup/common.sh@31-32 repeats IFS=': ' / read -r var val _ / continue for every /proc/meminfo field from MemFree through HugePages_Rsvd, none of which matches Hugepagesize]
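The loop condensed above, together with the Hugepagesize match that follows, is the trace of the helper setup/common.sh uses to pull a single field out of /proc/meminfo: split each line on ': ', compare the key against the requested name, and echo the value on a match. A roughly equivalent standalone sketch follows; the names are illustrative, and the traced helper additionally snapshots the file with mapfile and strips a leading 'Node <N> ' prefix so the same loop also works for the per-node meminfo files (visible at setup/common.sh@28-@29 later in the trace).

get_meminfo() {                        # illustrative rewrite, not the SPDK source
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # e.g. "Hugepagesize:       2048 kB" -> var=Hugepagesize val=2048 _=kB
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done </proc/meminfo
    return 1
}
# get_meminfo Hugepagesize  -> 2048 (kB), the value echoed at setup/common.sh@33 below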
00:03:27.285 10:28:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:27.285 10:28:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:27.285 10:28:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:27.285 10:28:51 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:27.285 10:28:51 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:27.285 10:28:51 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:27.285 10:28:51 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:27.285 10:28:51 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:27.285 10:28:51 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:27.285 10:28:51 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:27.285 10:28:51 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:27.285 10:28:51 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:27.285 10:28:51 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:27.285 10:28:51 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:27.285 10:28:51 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:27.285 10:28:51 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:27.285 10:28:51 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:27.285 10:28:51 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:27.285 10:28:51 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.285 10:28:51 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:27.285 10:28:51 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.285 10:28:51 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:27.285 10:28:51 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:27.285 10:28:51 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:27.285 10:28:51 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:27.285 10:28:51 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:27.285 10:28:51 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:27.285 10:28:51 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:27.285 10:28:51 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:27.285 10:28:51 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:27.285 10:28:51 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:27.286 10:28:51 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:27.286 10:28:51 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:27.286 10:28:51 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:27.286 10:28:51 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:27.286 10:28:51 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:27.286 10:28:51 
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:27.286 10:28:51 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:27.286 10:28:51 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:27.286 10:28:51 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:27.286 10:28:51 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:27.286 10:28:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:27.286 ************************************ 00:03:27.286 START TEST default_setup 00:03:27.286 ************************************ 00:03:27.286 10:28:51 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # default_setup 00:03:27.286 10:28:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:27.286 10:28:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:27.286 10:28:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:27.286 10:28:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:27.286 10:28:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:27.286 10:28:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:27.286 10:28:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:27.286 10:28:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:27.286 10:28:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:27.286 10:28:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:27.286 10:28:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:27.286 10:28:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:27.286 10:28:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:27.286 10:28:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:27.286 10:28:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:27.286 10:28:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:27.286 10:28:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:27.286 10:28:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:27.286 10:28:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:27.286 10:28:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:27.286 10:28:51 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:27.286 10:28:51 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:30.588 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:30.588 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:30.588 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:30.588 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:30.588 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:30.588 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:30.588 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 
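Around this point scripts/setup.sh is rebinding the ioatdma-owned DMA channels and the NVMe controller to vfio-pci for userspace access; those rebind lines continue just below. Just before that, the trace prepared the hugepage pool for default_setup: clear_hp zeroes every per-node hugepage pool, CLEAR_HUGE=yes is exported, and get_test_nr_hugepages converts the requested 2097152 kB (2 GiB) into 1024 pages of the default 2048 kB size for node 0. A hedged sketch of that arithmetic using the standard sysfs/procfs paths; xtrace does not show the redirect targets or where the final count is written (that happens inside scripts/setup.sh), so those parts are assumptions.

# Assumed paths and redirects; xtrace shows the echoes but not their targets.
default_hugepages=2048                          # kB, the Hugepagesize read above
global_huge_nr=/proc/sys/vm/nr_hugepages

# clear_hp: zero every per-node hugepage pool (the "echo 0" entries at @41 above)
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 | sudo tee "$hp/nr_hugepages" >/dev/null    # assumed redirect target
    done
done

# get_test_nr_hugepages 2097152 0: 2 GiB worth of 2048 kB pages, pinned to node 0
size_kb=2097152
nr_hugepages=$((size_kb / default_hugepages))   # = 1024, matching the trace
echo "$nr_hugepages" | sudo tee "$global_huge_nr" >/dev/null   # illustrative write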
00:03:30.588 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:30.588 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:30.588 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:30.588 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:30.588 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:30.588 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:30.588 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:30.588 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:30.854 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:30.854 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109319704 kB' 'MemAvailable: 112662820 kB' 'Buffers: 4132 kB' 'Cached: 10253856 kB' 'SwapCached: 0 kB' 'Active: 7365084 kB' 'Inactive: 3525960 kB' 'Active(anon): 6874488 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636504 kB' 'Mapped: 170944 kB' 'Shmem: 6241432 kB' 'KReclaimable: 301780 kB' 'Slab: 1142420 kB' 'SReclaimable: 301780 kB' 'SUnreclaim: 840640 kB' 'KernelStack: 27440 kB' 'PageTables: 8912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8474216 kB' 'VmallocTotal: 13743895347199 kB' 
'VmallocUsed: 235628 kB' 'VmallocChunk: 0 kB' 'Percpu: 119232 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4013428 kB' 'DirectMap2M: 43900928 kB' 'DirectMap1G: 88080384 kB' 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.854 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.855 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
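The snapshot printed at setup/common.sh@16 above already confirms the allocation default_setup asked for: HugePages_Total: 1024, HugePages_Free: 1024 and Hugepagesize: 2048 kB, i.e. a 2 GiB pool (Hugetlb: 2097152 kB) that nothing has consumed yet. A minimal standalone check of the same three counters, illustrative only and not part of the SPDK scripts:

# Standalone recomputation of the pool size from the counters in the snapshot above.
read -r total free size_kb < <(awk '
    /^HugePages_Total:/ {t=$2}
    /^HugePages_Free:/  {f=$2}
    /^Hugepagesize:/    {s=$2}
    END {print t, f, s}' /proc/meminfo)
printf 'hugetlb pool: %s pages x %s kB = %d kB (%s still free)\n' \
    "$total" "$size_kb" $((total * size_kb)) "$free"
# -> hugetlb pool: 1024 pages x 2048 kB = 2097152 kB (1024 still free)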
00:03:30.855 10:28:55 setup.sh.hugepages.default_setup -- [xtrace condensed: setup/common.sh@31-32 repeats IFS=': ' / read -r var val _ / continue for every /proc/meminfo field from Active(anon) through WritebackTmp, none of which matches AnonHugePages] 00:03:30.855 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.855 10:28:55
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.855 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.855 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.855 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.855 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.855 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.855 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.855 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.855 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.855 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.855 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.855 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.855 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.855 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.855 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.855 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.855 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.855 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.855 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.856 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.856 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.856 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.856 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.856 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.856 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.856 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.856 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.856 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.856 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:30.856 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:30.856 10:28:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:30.856 10:28:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:30.856 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.856 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:30.856 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 
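verify_nr_hugepages is collecting further counters the same way: AnonHugePages (transparent hugepage usage, 0 kB here, recorded as anon=0 at setup/hugepages.sh@97 above), then HugePages_Surp and HugePages_Rsvd below. The comparison it ultimately makes is not visible in this part of the trace, so the following is only a guess at the intent, relying on the kernel accounting fact that HugePages_Total includes surplus pages, so Total minus Surp is the persistent pool.

# Hypothetical restatement of the check; not the script's actual arithmetic.
expected=1024
anon=$(awk  '/^AnonHugePages:/   {print $2}' /proc/meminfo)   # THP usage in kB, 0 here
surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)   # surplus pages, 0
rsvd=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)   # reserved pages, 0
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
if (( total - surp == expected )); then
    echo "persistent pool matches the requested $expected pages (anon=${anon} kB, rsvd=$rsvd)"
fi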
00:03:30.856 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:30.856 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.856 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.856 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.856 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.856 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.856 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.856 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.856 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109319832 kB' 'MemAvailable: 112662948 kB' 'Buffers: 4132 kB' 'Cached: 10253860 kB' 'SwapCached: 0 kB' 'Active: 7365356 kB' 'Inactive: 3525960 kB' 'Active(anon): 6874760 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636844 kB' 'Mapped: 170944 kB' 'Shmem: 6241436 kB' 'KReclaimable: 301780 kB' 'Slab: 1142420 kB' 'SReclaimable: 301780 kB' 'SUnreclaim: 840640 kB' 'KernelStack: 27424 kB' 'PageTables: 8864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8474236 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235628 kB' 'VmallocChunk: 0 kB' 'Percpu: 119232 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4013428 kB' 'DirectMap2M: 43900928 kB' 'DirectMap1G: 88080384 kB' 00:03:30.856 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.856 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.856 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.856 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.856 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.856 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.856 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.856 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.856 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.856 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.856 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.856 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.856 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.856 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue [xtrace condensed: setup/common.sh@31-32 repeats IFS=': ' / read -r var val _ / continue for every /proc/meminfo field from Cached through CmaTotal, none of which matches HugePages_Surp] 00:03:30.857 10:28:55 setup.sh.hugepages.default_setup --
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.857 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.857 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.857 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.857 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.857 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.857 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.857 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.857 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.857 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.857 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.857 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.857 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.857 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.857 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.857 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.857 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.857 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.857 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.857 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.857 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.858 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:30.858 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:30.858 10:28:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:30.858 10:28:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:30.858 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:30.858 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:30.858 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:30.858 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:30.858 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.858 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.858 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.858 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.858 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.858 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:03:30.858 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.858 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109319076 kB' 'MemAvailable: 112662192 kB' 'Buffers: 4132 kB' 'Cached: 10253876 kB' 'SwapCached: 0 kB' 'Active: 7365420 kB' 'Inactive: 3525960 kB' 'Active(anon): 6874824 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636856 kB' 'Mapped: 170944 kB' 'Shmem: 6241452 kB' 'KReclaimable: 301780 kB' 'Slab: 1142420 kB' 'SReclaimable: 301780 kB' 'SUnreclaim: 840640 kB' 'KernelStack: 27424 kB' 'PageTables: 8864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8474256 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235628 kB' 'VmallocChunk: 0 kB' 'Percpu: 119232 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4013428 kB' 'DirectMap2M: 43900928 kB' 'DirectMap1G: 88080384 kB' 00:03:30.858 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.858 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.858 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.858 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.858 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.858 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.858 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.858 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.858 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.858 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.858 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.858 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.858 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.858 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.858 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.858 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.858 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.858 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.858 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.858 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.858 
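The snapshot just printed is the whole of /proc/meminfo, captured by get_meminfo in setup/common.sh: the file is read into an array with mapfile, any "Node <n> " prefix is stripped, and the lines are then re-read with IFS=': ' until the requested field is reached, whose value is echoed back (that is what the long run of [[ ... ]] / continue pairs is doing). A minimal self-contained sketch of the same lookup follows; it assumes extglob and is simplified relative to the repository's actual helper.

#!/usr/bin/env bash
# Minimal sketch of a get_meminfo-style lookup, modelled on the traced
# setup/common.sh logic (simplified, not the repository's exact function).
shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

get_meminfo() {
    local get=$1 node=${2:-}
    local var val
    local mem_f=/proc/meminfo mem

    # A per-node lookup reads that node's own meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <n> "; drop that part.
    mem=("${mem[@]#Node +([0-9]) }")

    # Walk "Field: value [kB]" lines until the requested field turns up.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Rsvd      # prints 0 on this build node
get_meminfo HugePages_Surp 0    # same field, restricted to NUMA node 0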
10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
[setup/common.sh@31-32 xtrace: every /proc/meminfo field from SwapCached through Unaccepted is read and skipped with continue; none of them is HugePages_Rsvd]
00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31
-- # read -r var val _ 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:30.860 nr_hugepages=1024 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:30.860 resv_hugepages=0 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:30.860 surplus_hugepages=0 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:30.860 anon_hugepages=0 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109319076 kB' 'MemAvailable: 112662192 kB' 'Buffers: 4132 kB' 'Cached: 10253896 kB' 'SwapCached: 0 kB' 'Active: 7365388 
kB' 'Inactive: 3525960 kB' 'Active(anon): 6874792 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636856 kB' 'Mapped: 170944 kB' 'Shmem: 6241472 kB' 'KReclaimable: 301780 kB' 'Slab: 1142420 kB' 'SReclaimable: 301780 kB' 'SUnreclaim: 840640 kB' 'KernelStack: 27424 kB' 'PageTables: 8864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8474276 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235628 kB' 'VmallocChunk: 0 kB' 'Percpu: 119232 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4013428 kB' 'DirectMap2M: 43900928 kB' 'DirectMap1G: 88080384 kB' 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:30.860 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:30.860 10:28:55 
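Taken together, the hugepages.sh@99-110 entries above are a consistency check: HugePages_Surp and HugePages_Rsvd are read via get_meminfo (both 0 here), nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 are reported, and HugePages_Total is now being fetched again to confirm it equals nr_hugepages plus surplus plus reserved. A small sketch of that arithmetic is below, with an awk one-liner standing in for the parser sketched earlier; the variable names follow the trace, the rest is illustrative.

#!/usr/bin/env bash
# Sketch of the check traced at setup/hugepages.sh@99-110 (illustrative
# recreation, not the repository's exact script).

# Tiny stand-in for the get_meminfo helper sketched earlier.
get_meminfo() { awk -v key="$1:" '$1 == key { print $2 }' /proc/meminfo; }

nr_hugepages=1024                      # what default_setup asked the kernel for

surp=$(get_meminfo HugePages_Surp)     # surplus pages, 0 in this run
resv=$(get_meminfo HugePages_Rsvd)     # reserved pages, 0 in this run

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"

# The kernel-wide pool must account for every requested, surplus and
# reserved page; in this log that is 1024 == 1024 + 0 + 0.
total=$(get_meminfo HugePages_Total)
(( total == nr_hugepages + surp + resv )) || {
    echo "hugepage accounting mismatch: HugePages_Total=$total" >&2
    exit 1
}
echo "hugepage accounting consistent: HugePages_Total=$total"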
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[setup/common.sh@31-32 xtrace: every /proc/meminfo field from Active through HardwareCorrupted is read and skipped with continue; none of them is HugePages_Total]
00:03:31.125 10:28:55
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:31.125 
10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:31.125 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:31.126 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.126 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:31.126 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:31.126 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.126 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.126 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:31.126 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:31.126 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 52492388 kB' 'MemUsed: 13166620 kB' 'SwapCached: 0 kB' 'Active: 5531200 kB' 'Inactive: 3325532 kB' 'Active(anon): 5195744 kB' 'Inactive(anon): 0 kB' 'Active(file): 335456 kB' 'Inactive(file): 3325532 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8690856 kB' 'Mapped: 123204 kB' 'AnonPages: 169152 kB' 'Shmem: 5029868 kB' 'KernelStack: 14040 kB' 'PageTables: 4876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 176996 kB' 'Slab: 637444 kB' 'SReclaimable: 176996 kB' 'SUnreclaim: 460448 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:31.126 10:28:55 
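With the system-wide total confirmed at 1024, get_nodes (hugepages.sh@27-33) records how those pages are spread across the two NUMA nodes (1024 on node0, 0 on node1), and the hugepages.sh@115-117 loop then starts re-reading each node's own meminfo, beginning with HugePages_Surp for node 0, whose snapshot was just printed. The sketch below walks the same sysfs layout; the trace only shows the already-expanded counts, so reading nr_hugepages out of the hugepages-2048kB directory is an assumption about where they come from.

#!/usr/bin/env bash
# Sketch of the per-node census traced at setup/hugepages.sh@27-33 and
# @115-117 (illustrative; the real script's exact expressions are not
# visible in this trace).

nodes_sys=()

# One entry per NUMA node. Reading nr_hugepages from sysfs is an assumption
# about where the traced values (1024 and 0) originate.
for node in /sys/devices/system/node/node[0-9]*; do
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done

no_nodes=${#nodes_sys[@]}
echo "no_nodes=$no_nodes"              # 2 on this build node

# Cross-check each node against its own meminfo, the way the log does with
# "get_meminfo HugePages_Surp 0": per-node meminfo lines read
# "Node <n> <Field>: <value>", hence fields 3 and 4 below.
for n in "${!nodes_sys[@]}"; do
    surp=$(awk '$3 == "HugePages_Surp:" { print $4 }' \
        "/sys/devices/system/node/node$n/meminfo")
    echo "node$n: nr_hugepages=${nodes_sys[$n]} HugePages_Surp=$surp"
done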
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[setup/common.sh@31-32 xtrace: every /sys/devices/system/node/node0/meminfo field from MemTotal through Unaccepted is read and skipped with continue; none of them is HugePages_Surp]
00:03:31.127 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.127 10:28:55
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:31.127 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:31.127 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:31.127 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.127 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:31.127 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:31.127 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:31.127 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.127 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:31.127 10:28:55 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:31.127 10:28:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:31.127 10:28:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:31.127 10:28:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:31.127 10:28:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:31.127 10:28:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:31.127 node0=1024 expecting 1024 00:03:31.127 10:28:55 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:31.127 00:03:31.127 real 0m3.658s 00:03:31.127 user 0m1.328s 00:03:31.127 sys 0m2.301s 00:03:31.127 10:28:55 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:31.127 10:28:55 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:31.127 ************************************ 00:03:31.127 END TEST default_setup 00:03:31.127 ************************************ 00:03:31.127 10:28:55 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:31.127 10:28:55 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:31.127 10:28:55 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:31.127 10:28:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:31.127 ************************************ 00:03:31.127 START TEST per_node_1G_alloc 00:03:31.127 ************************************ 00:03:31.127 10:28:55 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # per_node_1G_alloc 00:03:31.127 10:28:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:31.127 10:28:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:31.127 10:28:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:31.127 10:28:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:31.127 10:28:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:31.127 10:28:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:31.127 10:28:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
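[editor's note] At this point in the trace the default_setup test has just finished: get_meminfo returned 0 for HugePages_Surp, the per-node totals were folded into nodes_test, and the test passed its final check, "node0=1024 expecting 1024". The per_node_1G_alloc test that starts next calls get_test_nr_hugepages 1048576 0 1, i.e. it asks for 1 GiB worth of hugepages on each of nodes 0 and 1; with the 2048 kB default hugepage size shown in the meminfo snapshots below, that is 1048576 / 2048 = 512 pages per node, which is why the trace goes on to assign 512 to each nodes_test entry and to run scripts/setup.sh with NRHUGE=512 HUGENODE=0,1. The lines below are a hedged, illustrative sketch of that arithmetic and of reading the resulting per-node counts back from sysfs; the variable names and the direct sysfs reads are assumptions for illustration, not the SPDK test code itself.

    # Sketch only: per-node 2 MiB hugepage math and read-back (not setup/hugepages.sh).
    size_kb=1048576                           # 1 GiB requested per node
    hugepage_kb=2048                          # default hugepage size on this runner
    nr_per_node=$(( size_kb / hugepage_kb ))  # 1048576 / 2048 = 512
    for node in 0 1; do
      sysfs=/sys/devices/system/node/node${node}/hugepages/hugepages-${hugepage_kb}kB/nr_hugepages
      echo "node${node}: want ${nr_per_node}, have $(cat "${sysfs}")"
    done

In the job itself the allocation is delegated to SPDK's scripts/setup.sh, which the trace further down invokes after exporting NRHUGE=512 and HUGENODE=0,1.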
00:03:31.127 10:28:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:31.127 10:28:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:31.127 10:28:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:31.127 10:28:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:31.127 10:28:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:31.127 10:28:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:31.127 10:28:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:31.127 10:28:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:31.127 10:28:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:31.127 10:28:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:31.127 10:28:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:31.127 10:28:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:31.127 10:28:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:31.127 10:28:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:31.127 10:28:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:31.127 10:28:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:31.127 10:28:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:31.127 10:28:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:31.127 10:28:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:31.127 10:28:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:34.430 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:34.430 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:34.430 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:34.430 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:34.430 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:34.430 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:34.430 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:34.430 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:34.430 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:34.430 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:34.430 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:34.430 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:34.430 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:34.430 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:34.430 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:34.430 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:34.430 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109373428 kB' 'MemAvailable: 112716540 kB' 'Buffers: 4132 kB' 'Cached: 10254016 kB' 'SwapCached: 0 kB' 'Active: 7367296 kB' 'Inactive: 3525960 kB' 'Active(anon): 6876700 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638384 kB' 'Mapped: 170312 kB' 'Shmem: 6241592 kB' 'KReclaimable: 301772 kB' 'Slab: 1141764 kB' 'SReclaimable: 301772 kB' 'SUnreclaim: 839992 kB' 'KernelStack: 27376 kB' 'PageTables: 8592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8465256 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235676 kB' 'VmallocChunk: 0 kB' 'Percpu: 119232 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4013428 kB' 'DirectMap2M: 43900928 kB' 'DirectMap1G: 88080384 
kB' 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.697 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.698 
10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.698 10:28:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.698 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 
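[editor's note] The long run of continue lines above is the xtrace of the get_meminfo helper from the test's setup/common.sh resolving AnonHugePages (anon=0 on this runner, matching the snapshot); the block that starts here repeats the same walk for HugePages_Surp, and HugePages_Rsvd follows. The helper snapshots meminfo into an array (mapfile -t mem), strips any leading "Node <id> " prefix for per-node queries, then reads the entries with IFS=': ' and read -r var val _, skipping every key that is not the requested one and finally echoing the matching value. A minimal stand-alone sketch of the same lookup (an approximation, not a copy of setup/common.sh, and limited to the system-wide /proc/meminfo) could look like:

    # Hedged sketch of a get_meminfo-style lookup against /proc/meminfo only.
    get_meminfo_sketch() {
      local want=$1 var val _
      while IFS=': ' read -r var val _; do
        # e.g. "AnonHugePages:       0 kB" -> var=AnonHugePages, val=0
        [[ ${var} == "${want}" ]] && { echo "${val}"; return 0; }
      done < /proc/meminfo
      return 1
    }
    get_meminfo_sketch AnonHugePages    # 0 on this runner
    get_meminfo_sketch HugePages_Total  # 1024 in the snapshot above

The real helper additionally accepts a node number and reads /sys/devices/system/node/nodeN/meminfo instead, which is what the node/meminfo existence check near the top of each block is about.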
00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109376924 kB' 'MemAvailable: 112720036 kB' 'Buffers: 4132 kB' 'Cached: 10254020 kB' 'SwapCached: 0 kB' 'Active: 7369172 kB' 'Inactive: 3525960 kB' 'Active(anon): 6878576 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 640304 kB' 'Mapped: 170724 kB' 'Shmem: 6241596 kB' 'KReclaimable: 301772 kB' 'Slab: 1141764 kB' 'SReclaimable: 301772 kB' 'SUnreclaim: 839992 kB' 'KernelStack: 27360 kB' 'PageTables: 8568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8466608 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235632 kB' 'VmallocChunk: 0 kB' 'Percpu: 119232 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4013428 kB' 'DirectMap2M: 43900928 kB' 'DirectMap1G: 88080384 kB' 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.699 10:28:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.699 10:28:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.699 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.700 10:28:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.700 10:28:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.700 10:28:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.700 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.701 10:28:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:34.701 10:28:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109377984 kB' 'MemAvailable: 112721096 kB' 'Buffers: 4132 kB' 'Cached: 10254020 kB' 'SwapCached: 0 kB' 'Active: 7363440 kB' 'Inactive: 3525960 kB' 'Active(anon): 6872844 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634588 kB' 'Mapped: 170168 kB' 'Shmem: 6241596 kB' 'KReclaimable: 301772 kB' 'Slab: 1141800 kB' 'SReclaimable: 301772 kB' 'SUnreclaim: 840028 kB' 'KernelStack: 27360 kB' 'PageTables: 8580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8460508 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235644 kB' 'VmallocChunk: 0 kB' 'Percpu: 119232 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4013428 kB' 'DirectMap2M: 43900928 kB' 'DirectMap1G: 88080384 kB' 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.701 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.702 10:28:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.702 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:34.703 nr_hugepages=1024 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:34.703 resv_hugepages=0 00:03:34.703 10:28:58 
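With surp and resv both resolved to 0, the entries that follow echo the hugepage counters and then gate the test on the kernel totals matching the requested allocation. A condensed sketch of that check with the values from this run; it reuses the get_meminfo sketch above and is illustrative rather than the verbatim setup/hugepages.sh logic:

# Condensed from the hugepages.sh entries that follow (values from this run);
# assumes the get_meminfo helper sketched earlier is defined.
nr_hugepages=1024 surp=0 resv=0
echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=0"
# Proceed only if the kernel-reported total equals the requested count plus
# surplus and reserved pages (1024 == 1024 + 0 + 0 here):
(( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1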
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:34.703 surplus_hugepages=0 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:34.703 anon_hugepages=0 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.703 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109378692 kB' 'MemAvailable: 112721804 kB' 'Buffers: 4132 kB' 'Cached: 10254060 kB' 'SwapCached: 0 kB' 'Active: 7363420 kB' 'Inactive: 3525960 kB' 'Active(anon): 6872824 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634536 kB' 'Mapped: 169756 kB' 'Shmem: 6241636 kB' 'KReclaimable: 301772 kB' 'Slab: 1141800 kB' 'SReclaimable: 301772 kB' 'SUnreclaim: 840028 kB' 'KernelStack: 27376 kB' 'PageTables: 8600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8460532 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235628 kB' 'VmallocChunk: 0 kB' 'Percpu: 119232 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4013428 kB' 'DirectMap2M: 43900928 kB' 'DirectMap1G: 88080384 kB' 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:28:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:28:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.704 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:34.705 10:28:58 
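The get_nodes entries here enumerate /sys/devices/system/node/node* and record an even split of the 1024 pages (512 per node, no_nodes=2); the loop that follows then re-queries HugePages_Surp per node against /sys/devices/system/node/nodeN/meminfo. A sketch of that per-node bookkeeping reconstructed from the trace; the nodes_test contents and the standalone layout are assumptions, and the literal 512 is the already-expanded per-node share shown by xtrace:

# Sketch of the get_nodes / per-node verification traced around this point;
# relies on the get_meminfo sketch above. Illustrative only.
shopt -s extglob
declare -a nodes_sys
declare -a nodes_test=([0]=512 [1]=512)   # assumed per-node targets for this run

get_nodes() {
    local node
    for node in /sys/devices/system/node/node+([0-9]); do
        # xtrace shows the expanded value 512 here (1024 pages over 2 nodes);
        # the real script derives it from its own variables.
        nodes_sys[${node##*node}]=512
    done
    no_nodes=${#nodes_sys[@]}             # 2 on this machine
    (( no_nodes > 0 ))
}

get_nodes
resv=0
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))        # resv is 0, so the targets stay at 512
    # Per-node surplus check against /sys/devices/system/node/node$node/meminfo:
    (( $(get_meminfo HugePages_Surp "$node") == 0 ))
done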
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.705 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:34.706 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:34.706 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.706 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.706 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:34.706 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:34.706 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.706 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.706 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.706 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.706 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53548372 kB' 'MemUsed: 12110636 kB' 'SwapCached: 0 kB' 'Active: 5528080 kB' 'Inactive: 3325532 kB' 'Active(anon): 5192624 kB' 'Inactive(anon): 0 kB' 'Active(file): 335456 kB' 'Inactive(file): 3325532 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8690968 kB' 'Mapped: 122708 kB' 'AnonPages: 165884 kB' 'Shmem: 5029980 kB' 'KernelStack: 13960 kB' 'PageTables: 4572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 176988 kB' 'Slab: 637292 kB' 'SReclaimable: 176988 kB' 'SUnreclaim: 460304 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:34.706 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.706 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:34.706 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.706 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.706 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.706 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue
[... xtrace elided: setup/common.sh@31-32 keep reading the remaining node0 meminfo fields (MemUsed through HugePages_Free) and hit "continue" for each one, since none of them is HugePages_Surp ...]
00:03:34.707 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:34.707 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:34.707 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:34.707 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:34.707 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:34.707 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:34.707 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:34.707 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:34.707 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:03:34.707 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:34.707 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:34.707 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:34.707 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:34.707 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:34.707 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:34.707 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:34.707 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:34.707 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:34.707 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679856 kB' 'MemFree: 55830940 kB' 'MemUsed: 4848916 kB' 'SwapCached: 0 kB' 'Active: 1835384 kB' 'Inactive: 200428 kB' 'Active(anon): 1680244 kB' 'Inactive(anon): 0 kB' 'Active(file): 155140 kB' 'Inactive(file): 200428 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1567248 kB' 'Mapped: 47048 kB' 'AnonPages: 468656 kB' 'Shmem: 1211680 kB' 'KernelStack: 13416 kB' 'PageTables: 4028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124784 kB' 'Slab: 504500 kB' 'SReclaimable: 124784 kB' 'SUnreclaim: 379716 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
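The loop being traced here is only scanning one meminfo file for a single key. Below is a minimal standalone sketch of that idea in bash; it is not the SPDK setup/common.sh itself, and the function name get_meminfo_sketch plus the sed-based "Node <N>" prefix stripping are illustrative assumptions.

#!/usr/bin/env bash
# Hedged sketch: look up one key (e.g. HugePages_Surp) in /proc/meminfo or,
# when a node number is given, in /sys/devices/system/node/node<N>/meminfo,
# whose lines carry a "Node <N> " prefix that has to be stripped first.
get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Split each "Key:   value kB" line on ": " exactly like the traced loop,
    # after dropping the optional "Node <N> " prefix.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
    return 1
}

# Example matching the call traced above: HugePages_Surp on node 1 (prints 0 here).
get_meminfo_sketch HugePages_Surp 1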
[... xtrace elided: the node1 snapshot above is scanned the same way, each field (MemTotal through HugePages_Free) compared against HugePages_Surp and skipped with "continue" ...]
00:03:34.709 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:34.709 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:34.709 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:34.709 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:34.709 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:34.709 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:34.709 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:34.709 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:34.709 node0=512 expecting 512
00:03:34.709 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:34.709 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:34.709 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:34.709 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:34.709 node1=512 expecting 512
00:03:34.709 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:34.709
00:03:34.709 real 0m3.644s
00:03:34.709 user 0m1.440s
00:03:34.709 sys 0m2.263s
00:03:34.709 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:03:34.709 10:28:58 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:34.709 ************************************
00:03:34.709 END TEST per_node_1G_alloc
00:03:34.709 ************************************
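The two echoed lines above are the test's check that each NUMA node ended up with 512 hugepages. A hedged sketch that reads the same per-node counts directly from the kernel's hugetlb sysfs tree; the paths are standard sysfs, and the 2048 kB page size and expected count of 512 are taken from this run.

#!/usr/bin/env bash
# Hedged sketch: compare the per-node 2048 kB hugepage counters in sysfs with
# the value the test expects (512 per node in the trace above).
expected=512
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*/node}                                   # e.g. 0, 1
    total=$(<"$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
    free=$(<"$node_dir/hugepages/hugepages-2048kB/free_hugepages")
    echo "node${node}=${total} (free ${free}) expecting ${expected}"
    if [[ $total -ne $expected ]]; then
        echo "node${node}: unexpected hugepage count" >&2
    fi
done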
00:03:34.709 10:28:58 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:34.709 10:28:58 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:03:34.709 10:28:58 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:03:34.709 10:28:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:34.970 ************************************
00:03:34.970 START TEST even_2G_alloc
00:03:34.970 ************************************
00:03:34.970 10:28:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # even_2G_alloc
00:03:34.970 10:28:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:34.970 10:28:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:34.970 10:28:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:34.970 10:28:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:34.970 10:28:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:34.970 10:28:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:34.970 10:28:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:34.970 10:28:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:34.970 10:28:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:34.970 10:28:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:34.970 10:28:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:34.970 10:28:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:34.970 10:28:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:34.970 10:28:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:34.970 10:28:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:34.970 10:28:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:34.970 10:28:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:34.970 10:28:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:34.970 10:28:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:34.970 10:28:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:34.970 10:28:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:34.970 10:28:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:34.970 10:28:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:34.970 10:28:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:34.970 10:28:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
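The parameter trace above works out to: a 2097152 kB (2 GiB) request at the default 2048 kB hugepage size gives nr_hugepages=1024, split evenly over _no_nodes=2, i.e. 512 pages per node, before HUGE_EVEN_ALLOC=yes is handed to setup.sh. A hedged sketch of that arithmetic follows; the variable names are illustrative, not the hugepages.sh internals.

#!/usr/bin/env bash
# Hedged sketch of the sizing traced above: requested kB / Hugepagesize kB
# gives the total page count, which is then divided over the NUMA nodes present.
size_kb=2097152                                               # from the trace
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
nr_hugepages=$(( size_kb / hugepagesize_kb ))
nodes=$(ls -d /sys/devices/system/node/node[0-9]* | wc -l)
per_node=$(( nr_hugepages / nodes ))
echo "NRHUGE=${nr_hugepages} nodes=${nodes} per_node=${per_node}"
# With HUGE_EVEN_ALLOC=yes the setup script is then expected to request this
# per-node count instead of letting the kernel place all pages on one node.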
00:03:34.970 10:28:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:34.970 10:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:34.970 10:28:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:38.278 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:38.278 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:38.278 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:38.278 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:38.278 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:38.278 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:38.278 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:38.278 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:38.278 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:38.278 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:38.278 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:38.278 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:38.278 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:38.278 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:38.278 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:38.278 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:38.278 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:38.278 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:38.278 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:38.278 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:38.278 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:38.278 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:38.278 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:38.278 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:38.278 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:38.278 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:38.278 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:38.278 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
[... xtrace elided: get_meminfo locals and mapfile setup; no node argument is given, so mem_f stays /proc/meminfo ...]
00:03:38.278 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109374820 kB' 'MemAvailable: 112717916 kB' 'Buffers: 4132 kB' 'Cached: 10254200 kB' 'SwapCached: 0 kB' 'Active: 7363764 kB' 'Inactive: 3525960 kB' 'Active(anon): 6873168 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634624 kB' 'Mapped: 169856 kB' 'Shmem: 6241776 kB' 'KReclaimable: 301740 kB' 'Slab: 1142412 kB' 'SReclaimable: 301740 kB' 'SUnreclaim: 840672 kB' 'KernelStack: 27312 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8461296 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235676 kB' 'VmallocChunk: 0 kB' 'Percpu: 119232 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4013428 kB' 'DirectMap2M: 43900928 kB' 'DirectMap1G: 88080384 kB'
[... xtrace elided: each snapshot field (MemTotal through HardwareCorrupted) is compared against AnonHugePages and skipped with "continue" ...]
00:03:38.279 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:38.279 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:38.279 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:38.280 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
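In the snapshot above AnonHugePages is 0 kB, so anon=0, and the hugetlb pool reports HugePages_Total: 1024 with Hugepagesize: 2048 kB and Hugetlb: 2097152 kB. A hedged sketch of that consistency check; the identity only holds like this when a single hugepage size is in use, as it is in this run.

#!/usr/bin/env bash
# Hedged sketch: with only the default hugepage size in play, the pool should
# satisfy HugePages_Total * Hugepagesize == Hugetlb (1024 * 2048 kB = 2097152 kB
# in the snapshot), while AnonHugePages (THP) is accounted separately.
read -r total size hugetlb < <(awk '
    /^HugePages_Total:/ {t=$2}
    /^Hugepagesize:/    {s=$2}
    /^Hugetlb:/         {h=$2}
    END {print t, s, h}' /proc/meminfo)
if (( total * size == hugetlb )); then
    echo "hugetlb pool consistent: ${total} pages x ${size} kB = ${hugetlb} kB"
else
    echo "hugetlb mismatch: ${total} x ${size} kB != ${hugetlb} kB" >&2
fi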
00:03:38.280 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:38.280 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:38.280 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
[... xtrace elided: get_meminfo locals and mapfile setup; again no node argument, so /proc/meminfo is read ...]
00:03:38.280 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109374996 kB' 'MemAvailable: 112718076 kB' 'Buffers: 4132 kB' 'Cached: 10254204 kB' 'SwapCached: 0 kB' 'Active: 7363644 kB' 'Inactive: 3525960 kB' 'Active(anon): 6873048 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634564 kB' 'Mapped: 169824 kB' 'Shmem: 6241780 kB' 'KReclaimable: 301708 kB' 'Slab: 1142364 kB' 'SReclaimable: 301708 kB' 'SUnreclaim: 840656 kB' 'KernelStack: 27360 kB' 'PageTables: 8544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8461316 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235660 kB' 'VmallocChunk: 0 kB' 'Percpu: 119232 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4013428 kB' 'DirectMap2M: 43900928 kB' 'DirectMap1G: 88080384 kB'
[... xtrace elided: the snapshot fields MemTotal through Dirty are compared against HugePages_Surp and skipped with "continue" ...]
00:03:38.280 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.280 10:29:02 setup.sh.hugepages.even_2G_alloc
-- setup/common.sh@32 -- # continue 00:03:38.280 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.280 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.280 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.280 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.280 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.280 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.280 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.281 10:29:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.281 10:29:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.281 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109375084 kB' 'MemAvailable: 112718164 kB' 'Buffers: 4132 kB' 'Cached: 10254204 kB' 'SwapCached: 0 kB' 'Active: 7363648 kB' 'Inactive: 3525960 kB' 'Active(anon): 6873052 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634568 kB' 'Mapped: 169824 kB' 'Shmem: 6241780 kB' 'KReclaimable: 301708 kB' 'Slab: 1142408 kB' 'SReclaimable: 301708 kB' 'SUnreclaim: 840700 kB' 'KernelStack: 27376 kB' 'PageTables: 8604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8461336 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235660 kB' 'VmallocChunk: 0 kB' 'Percpu: 119232 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4013428 kB' 'DirectMap2M: 43900928 kB' 'DirectMap1G: 88080384 kB' 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.282 10:29:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.282 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 10:29:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.283 10:29:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.283 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:38.284 nr_hugepages=1024 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:38.284 resv_hugepages=0 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:38.284 surplus_hugepages=0 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:38.284 anon_hugepages=0 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
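(The trace up to this point repeatedly exercises the get_meminfo helper from setup/common.sh: it reads /proc/meminfo — or a per-NUMA-node meminfo file when a node is given — strips any "Node N" prefix, then walks the key/value pairs until it finds the requested field and echoes its value, as seen for HugePages_Surp and HugePages_Rsvd above. A minimal standalone sketch of that parsing loop, with the interface and behaviour inferred from the xtrace output rather than copied from the SPDK sources, is:

#!/usr/bin/env bash
# Hypothetical, simplified re-creation of the get_meminfo() helper whose
# execution is traced above; names and flow are inferred from the log.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo mem line var val _
    # When a node id is supplied, prefer the per-NUMA-node meminfo file.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem <"$mem_f"
    # Per-node meminfo prefixes every line with "Node <n> "; strip it.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "$val"        # value only, e.g. "0" for HugePages_Surp
            return 0
        fi
    done
    return 1
}

get_meminfo HugePages_Surp   # -> 0 in the run above
get_meminfo HugePages_Rsvd   # -> 0 in the run above

The comparison against the escaped pattern (e.g. \H\u\g\e\P\a\g\e\s\_\S\u\r\p) and the per-key "continue" lines in the trace are simply this loop skipping every field that is not the one requested. The trace now continues with the HugePages_Total lookup:)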
00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109375312 kB' 'MemAvailable: 112718392 kB' 'Buffers: 4132 kB' 'Cached: 10254244 kB' 'SwapCached: 0 kB' 'Active: 7363644 kB' 'Inactive: 3525960 kB' 'Active(anon): 6873048 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634536 kB' 'Mapped: 169824 kB' 'Shmem: 6241820 kB' 'KReclaimable: 301708 kB' 'Slab: 1142408 kB' 'SReclaimable: 301708 kB' 'SUnreclaim: 840700 kB' 'KernelStack: 27360 kB' 'PageTables: 8552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8461360 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235676 kB' 'VmallocChunk: 0 kB' 'Percpu: 119232 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4013428 kB' 'DirectMap2M: 43900928 kB' 'DirectMap1G: 88080384 kB' 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.284 10:29:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.284 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.285 
10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.285 10:29:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.285 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.286 
10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53531372 kB' 'MemUsed: 12127636 kB' 'SwapCached: 0 kB' 'Active: 5529768 kB' 'Inactive: 3325532 kB' 'Active(anon): 5194312 kB' 'Inactive(anon): 0 kB' 'Active(file): 335456 kB' 'Inactive(file): 3325532 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8691108 kB' 'Mapped: 122736 kB' 'AnonPages: 167416 kB' 'Shmem: 5030120 kB' 'KernelStack: 13992 kB' 'PageTables: 
4684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 176924 kB' 'Slab: 637560 kB' 'SReclaimable: 176924 kB' 'SUnreclaim: 460636 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.286 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.287 10:29:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.287 10:29:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.287 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.288 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:38.288 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:38.288 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.288 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.549 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679856 kB' 'MemFree: 55843276 kB' 'MemUsed: 4836580 kB' 'SwapCached: 0 kB' 'Active: 1834496 kB' 'Inactive: 200428 kB' 'Active(anon): 1679356 kB' 'Inactive(anon): 0 kB' 'Active(file): 155140 kB' 'Inactive(file): 200428 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1567288 kB' 'Mapped: 47084 kB' 'AnonPages: 467748 kB' 'Shmem: 1211720 kB' 'KernelStack: 13384 kB' 'PageTables: 3980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124784 kB' 'Slab: 504848 kB' 'SReclaimable: 124784 kB' 'SUnreclaim: 380064 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:38.549 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.549 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.549 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.549 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.549 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.549 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.549 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.549 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.549 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 10:29:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.550 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.551 10:29:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:38.551 node0=512 expecting 512
00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:38.551 node1=512 expecting 512
00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:38.551
00:03:38.551 real 0m3.605s
00:03:38.551 user 0m1.481s
00:03:38.551 sys 0m2.182s
00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:03:38.551 10:29:02 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:38.551 ************************************
00:03:38.551 END TEST even_2G_alloc
00:03:38.551 ************************************
00:03:38.551 10:29:02 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:38.551 10:29:02 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:03:38.551 10:29:02 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:03:38.551 10:29:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:38.551 ************************************
00:03:38.551 START TEST odd_alloc
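The even_2G_alloc test traced above asks for 1024 hugepages of 2048 kB and passes only when they land evenly on the two NUMA nodes, which is what the node0=512 / node1=512 lines confirm. A minimal standalone sketch of that check, assuming the usual /sys/devices/system/node/node<N>/meminfo layout; check_even_split is a hypothetical helper, not part of the SPDK scripts.

# Sketch only: verify each NUMA node holds the expected share of hugepages.
check_even_split() {
    local expected_per_node=$1 node_dir node total
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        # Per-node meminfo lines look like "Node 0 HugePages_Total:   512".
        total=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
        echo "node${node}=${total:-0} expecting ${expected_per_node}"
        (( ${total:-0} == expected_per_node )) || return 1
    done
}
# e.g. "check_even_split 512" mirrors the node0/node1 lines printed above.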
************************************ 00:03:38.551 10:29:02 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # odd_alloc 00:03:38.551 10:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:38.551 10:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:38.551 10:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:38.551 10:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:38.551 10:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:38.551 10:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:38.551 10:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:38.551 10:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:38.551 10:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:38.551 10:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:38.551 10:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:38.551 10:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:38.551 10:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:38.551 10:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:38.551 10:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:38.551 10:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:38.551 10:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:38.551 10:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:38.551 10:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:38.551 10:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:38.551 10:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:38.551 10:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:38.551 10:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:38.551 10:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:38.551 10:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:38.551 10:29:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:38.551 10:29:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:38.551 10:29:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:41.855 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:41.855 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:41.855 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:41.855 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:41.855 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:41.855 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:41.855 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:41.855 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:41.855 
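odd_alloc repeats the exercise with an odd count: get_test_nr_hugepages 2098176 (kB, i.e. HUGEMEM=2049) works out to 1025 pages, and the per-node pass traced above records them as node0=513 and node1=512. A rough sketch of that distribution, under the assumption that the leftover pages simply go to the lowest-numbered nodes; split_hugepages_across_nodes is hypothetical, not the SPDK helper.

# Sketch only: spread an odd hugepage count over a node count.
split_hugepages_across_nodes() {
    local total=$1 nodes=$2 base rem i
    base=$(( total / nodes ))
    rem=$(( total % nodes ))
    for (( i = 0; i < nodes; i++ )); do
        # the first $rem nodes absorb the leftover pages
        echo "node${i}=$(( base + (i < rem ? 1 : 0) ))"
    done
}
# split_hugepages_across_nodes 1025 2   ->   node0=513, node1=512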
0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:41.855 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:41.855 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:41.855 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:41.855 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:41.855 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:41.855 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:41.855 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:41.855 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109405248 kB' 'MemAvailable: 112748328 kB' 'Buffers: 4132 kB' 'Cached: 10254376 kB' 'SwapCached: 0 kB' 'Active: 7366432 kB' 'Inactive: 3525960 kB' 'Active(anon): 6875836 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636596 kB' 'Mapped: 169828 kB' 'Shmem: 6241952 kB' 'KReclaimable: 301708 kB' 'Slab: 1142180 kB' 'SReclaimable: 301708 kB' 'SUnreclaim: 840472 kB' 'KernelStack: 27376 kB' 'PageTables: 8620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508436 kB' 'Committed_AS: 8480840 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235644 kB' 
'VmallocChunk: 0 kB' 'Percpu: 119232 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4013428 kB' 'DirectMap2M: 43900928 kB' 'DirectMap1G: 88080384 kB' 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.855 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.121 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.121 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.121 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.121 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.121 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.121 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.122 10:29:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.122 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 
10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109410540 kB' 'MemAvailable: 112753620 kB' 'Buffers: 4132 kB' 'Cached: 10254380 kB' 'SwapCached: 0 kB' 'Active: 7364532 kB' 'Inactive: 3525960 kB' 'Active(anon): 6873936 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635280 kB' 'Mapped: 169880 kB' 'Shmem: 6241956 kB' 'KReclaimable: 301708 kB' 'Slab: 1142120 kB' 'SReclaimable: 301708 kB' 'SUnreclaim: 840412 kB' 'KernelStack: 27344 kB' 'PageTables: 8496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508436 kB' 'Committed_AS: 8462056 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235548 kB' 'VmallocChunk: 0 kB' 'Percpu: 119232 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4013428 kB' 'DirectMap2M: 43900928 kB' 'DirectMap1G: 88080384 kB' 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
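A note on what the scan above is doing: setup/common.sh's get_meminfo walks the memory statistics one line at a time, splitting each line on IFS=': ' into a key and a value, skipping every key that is not the one requested (AnonHugePages first, then HugePages_Surp), and echoing the matching value before returning. The following is a minimal sketch of that loop reconstructed from this trace; it keeps the names visible in the log (get, mem_f, var, val) but omits the mapfile and "Node <n> " prefix handling the real helper performs, so it is not the actual setup/common.sh implementation.

    # Simplified reconstruction of the get_meminfo scan traced above.
    get_meminfo() {
        local get=$1            # field to look up, e.g. HugePages_Surp
        local mem_f=/proc/meminfo
        local var val
        # The real helper can also read /sys/devices/system/node/node<N>/meminfo
        # and strips the leading "Node <N> " from each line; omitted here.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the [[ ... == \H\u\g... ]] lines in the trace
            echo "$val"                        # e.g. 0 for HugePages_Surp, 1025 for HugePages_Total
            return 0
        done <"$mem_f"
        return 1
    }

Used the way the trace shows, surp=$(get_meminfo HugePages_Surp) yields 0 on this machine, which is the value hugepages.sh stores at @99.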
00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.123 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 10:29:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109412284 kB' 'MemAvailable: 112755364 kB' 'Buffers: 4132 kB' 'Cached: 10254396 kB' 'SwapCached: 0 kB' 'Active: 7364288 kB' 'Inactive: 3525960 kB' 'Active(anon): 6873692 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634984 kB' 'Mapped: 169820 kB' 'Shmem: 6241972 kB' 'KReclaimable: 301708 kB' 'Slab: 1142140 kB' 'SReclaimable: 301708 kB' 'SUnreclaim: 840432 kB' 'KernelStack: 27312 kB' 'PageTables: 8352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508436 kB' 'Committed_AS: 8462084 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235548 kB' 'VmallocChunk: 0 kB' 'Percpu: 119232 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4013428 kB' 'DirectMap2M: 43900928 kB' 'DirectMap1G: 88080384 kB' 00:03:42.124 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.125 10:29:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.125 
10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.125 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:42.126 nr_hugepages=1025 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:42.126 resv_hugepages=0 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:42.126 surplus_hugepages=0 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:42.126 anon_hugepages=0 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- 
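At this point the odd_alloc step has everything it needs: the three scans above returned anon=0, surp=0 and resv=0, nr_hugepages=1025 is the odd page count the test configured earlier, and hugepages.sh@107/@109 verify that the kernel's accounting adds up before the HugePages_Total scan that follows. Below is a small self-contained restatement of that check, with values taken from this run's log; the literal 1025 on the left is what xtrace shows after expansion, and the real expression in setup/hugepages.sh may compute it differently.

    # Values observed in the snapshot above.
    nr_hugepages=1025   # the odd allocation requested by the test (hugepages.sh@102)
    surp=0              # HugePages_Surp
    resv=0              # HugePages_Rsvd
    anon=0              # AnonHugePages, in kB
    # Accounting must be consistent before the totals are examined further.
    (( 1025 == nr_hugepages + surp + resv )) && echo 'total matches requested + surplus + reserved'
    (( 1025 == nr_hugepages )) && echo 'no surplus or reserved pages in this run'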
setup/common.sh@20 -- # local mem_f mem 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.126 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109411808 kB' 'MemAvailable: 112754888 kB' 'Buffers: 4132 kB' 'Cached: 10254436 kB' 'SwapCached: 0 kB' 'Active: 7363980 kB' 'Inactive: 3525960 kB' 'Active(anon): 6873384 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634600 kB' 'Mapped: 169820 kB' 'Shmem: 6242012 kB' 'KReclaimable: 301708 kB' 'Slab: 1142140 kB' 'SReclaimable: 301708 kB' 'SUnreclaim: 840432 kB' 'KernelStack: 27296 kB' 'PageTables: 8296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508436 kB' 'Committed_AS: 8462108 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235548 kB' 'VmallocChunk: 0 kB' 'Percpu: 119232 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4013428 kB' 'DirectMap2M: 43900928 kB' 'DirectMap1G: 88080384 kB' 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.127 10:29:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.127 10:29:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.127 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53539380 kB' 'MemUsed: 12119628 kB' 'SwapCached: 0 kB' 'Active: 5529872 kB' 'Inactive: 3325532 kB' 'Active(anon): 5194416 kB' 'Inactive(anon): 0 kB' 'Active(file): 335456 kB' 'Inactive(file): 3325532 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8691208 kB' 'Mapped: 122716 kB' 'AnonPages: 167268 kB' 'Shmem: 5030220 kB' 'KernelStack: 13912 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 176924 kB' 'Slab: 637488 kB' 'SReclaimable: 176924 kB' 'SUnreclaim: 460564 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.128 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.129 10:29:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.129 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679856 kB' 'MemFree: 55871312 kB' 'MemUsed: 4808544 kB' 'SwapCached: 0 kB' 'Active: 1834704 kB' 'Inactive: 200428 kB' 'Active(anon): 1679564 kB' 'Inactive(anon): 0 kB' 'Active(file): 155140 kB' 'Inactive(file): 200428 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1567380 kB' 'Mapped: 47104 kB' 'AnonPages: 467964 kB' 'Shmem: 1211812 kB' 'KernelStack: 13368 kB' 'PageTables: 3948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124784 kB' 'Slab: 504620 kB' 'SReclaimable: 124784 kB' 'SUnreclaim: 379836 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.130 10:29:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.130 10:29:06 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.130 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:42.131 node0=512 expecting 513 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:42.131 node1=513 expecting 512 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:42.131 00:03:42.131 real 0m3.662s 00:03:42.131 user 0m1.504s 00:03:42.131 sys 0m2.218s 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:42.131 10:29:06 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:42.131 ************************************ 00:03:42.131 END TEST odd_alloc 00:03:42.131 ************************************ 00:03:42.131 10:29:06 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:42.131 10:29:06 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:42.131 10:29:06 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:42.131 10:29:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:42.131 ************************************ 00:03:42.131 START TEST custom_alloc 00:03:42.131 ************************************ 00:03:42.131 10:29:06 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # custom_alloc 00:03:42.131 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:42.131 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:42.131 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:42.131 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:42.131 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:42.131 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:42.131 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:42.131 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:42.131 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:42.131 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:42.131 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:42.131 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:42.131 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:42.131 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 
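The trace above is the odd_alloc pass of setup/hugepages.sh: get_meminfo (setup/common.sh) walks /proc/meminfo, or a per-node /sys/devices/system/node/nodeN/meminfo file, field by field until the requested key (HugePages_Total, HugePages_Surp) matches, and the test then checks that the global count (1025) equals the per-node split of 512 + 513. The following is a minimal standalone sketch of that parsing pattern, for illustration only; get_meminfo_sketch and its consistency check are not part of the SPDK scripts.

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern traced above: pick /proc/meminfo or a
# per-node meminfo file, strip the "Node <n> " prefix that per-node files
# carry, then print the value of the requested field.
get_meminfo_sketch() {
    local get=$1 node=${2-} mem_f=/proc/meminfo line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        line=${line#Node "$node" }              # no-op for /proc/meminfo lines
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done <"$mem_f"
    return 1
}

# Same consistency check odd_alloc performs above (1025 == 512 + 513): the
# global hugepage count must equal the sum over the NUMA nodes.
shopt -s nullglob
total=$(get_meminfo_sketch HugePages_Total)
sum=0
for n in /sys/devices/system/node/node[0-9]*; do
    (( sum += $(get_meminfo_sketch HugePages_Total "${n##*node}") ))
done
echo "global=$total per-node sum=$sum"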
00:03:42.131 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:42.131 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:42.131 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:42.131 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:42.131 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:42.131 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:42.131 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:42.131 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:42.131 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:42.131 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:42.131 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:42.131 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for 
node in "${!nodes_hp[@]}" 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.132 10:29:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:46.343 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:46.343 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:46.343 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:46.343 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:46.343 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:46.343 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:46.343 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:46.343 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:46.343 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:46.343 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:46.343 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:46.343 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:46.343 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 
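In the custom_alloc trace above, the per-node targets nodes_hp[0]=512 and nodes_hp[1]=1024 are flattened into the HUGENODE string 'nodes_hp[0]=512,nodes_hp[1]=1024' (joined with IFS=,) before scripts/setup.sh is invoked. Below is a small sketch of just that string assembly, for illustration; build_hugenode is not an SPDK helper, and how setup.sh consumes HUGENODE is not reproduced here.

# Flatten a per-node hugepage array into the comma-joined HUGENODE form seen
# in the log: nodes_hp[0]=512,nodes_hp[1]=1024
build_hugenode() {
    local -a nodes_hp=("$@") entries=()
    local node
    for node in "${!nodes_hp[@]}"; do
        entries+=("nodes_hp[$node]=${nodes_hp[node]}")
    done
    local IFS=,
    echo "${entries[*]}"
}

HUGENODE=$(build_hugenode 512 1024)
echo "$HUGENODE"    # -> nodes_hp[0]=512,nodes_hp[1]=1024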
00:03:46.343 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:46.343 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:46.343 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:46.343 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:46.343 10:29:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:46.343 10:29:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:46.343 10:29:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:46.343 10:29:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:46.343 10:29:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:46.343 10:29:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:46.343 10:29:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:46.343 10:29:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:46.343 10:29:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:46.343 10:29:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:46.343 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:46.343 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:46.343 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:46.343 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.343 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.343 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.343 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.343 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.343 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.343 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.343 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.343 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 108373812 kB' 'MemAvailable: 111716892 kB' 'Buffers: 4132 kB' 'Cached: 10254548 kB' 'SwapCached: 0 kB' 'Active: 7365696 kB' 'Inactive: 3525960 kB' 'Active(anon): 6875100 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636804 kB' 'Mapped: 169884 kB' 'Shmem: 6242124 kB' 'KReclaimable: 301708 kB' 'Slab: 1141752 kB' 'SReclaimable: 301708 kB' 'SUnreclaim: 840044 kB' 'KernelStack: 27536 kB' 'PageTables: 9076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985172 kB' 'Committed_AS: 8466052 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235772 kB' 'VmallocChunk: 0 kB' 'Percpu: 119232 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4013428 kB' 'DirectMap2M: 43900928 kB' 'DirectMap1G: 88080384 kB'
[get_meminfo AnonHugePages: repetitive per-field scan of the snapshot above elided -- each non-matching field is skipped through the continue branch at setup/common.sh@32 until the AnonHugePages line is reached]
00:03:46.345 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:46.345 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:46.345 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:46.345 10:29:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:46.345 10:29:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:46.345 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:46.345 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:46.345 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:46.345 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:46.345 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:46.345 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:46.345 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:46.345 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:46.345 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:46.345 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:46.345 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:46.345 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 108375408 kB' 'MemAvailable: 111718488 kB' 'Buffers: 4132 kB' 'Cached: 10254552 kB' 'SwapCached: 0 kB' 'Active: 7365720 kB' 'Inactive: 3525960 kB' 'Active(anon): 6875124 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636352 kB' 'Mapped: 169868 kB' 'Shmem: 6242128 kB' 'KReclaimable: 301708 kB' 'Slab: 1141728 kB' 'SReclaimable: 301708 kB' 'SUnreclaim: 840020 kB' 'KernelStack: 27456 kB' 'PageTables: 8492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985172 kB' 'Committed_AS: 8464360 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235644 kB' 'VmallocChunk: 0 kB' 'Percpu: 119232 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4013428 kB' 'DirectMap2M: 43900928 kB' 'DirectMap1G: 88080384 kB'
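The two /proc/meminfo snapshots above are the data verify_nr_hugepages cares about: the run requested 512 + 1024 pages through HUGENODE, so the pool should report HugePages_Total and HugePages_Free at 1536 with nothing reserved or surplus. A minimal standalone check of that accounting is sketched below; it is illustrative only (the awk one-liners and the expected variable are not part of the SPDK scripts):

expected=1536   # 512 pages on node 0 + 1024 pages on node 1, as requested via HUGENODE
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
free=$(awk '$1 == "HugePages_Free:" {print $2}' /proc/meminfo)
rsvd=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)
surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
if (( total == expected && free == total && rsvd == 0 && surp == 0 )); then
    echo "hugepage pool matches the request: $total pages"
else
    echo "unexpected hugepage accounting: total=$total free=$free rsvd=$rsvd surp=$surp" >&2
fi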
[get_meminfo HugePages_Surp: the same per-field scan of the snapshot above elided -- non-matching fields are skipped at setup/common.sh@32 until the HugePages_* lines near the end of the snapshot are reached]
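The loop elided above amounts to a keyed lookup in that snapshot: split each 'Field: value' line and print the value once the requested field (here HugePages_Surp) matches. A condensed equivalent is sketched below, using sed/awk instead of the script's bash read loop; the helper name meminfo_value is made up for this sketch and is not the get_meminfo from setup/common.sh:

meminfo_value() {
    # meminfo_value FIELD [NODE] - print FIELD's value from /proc/meminfo,
    # or from the per-node copy when a NUMA node number is given.
    local field=$1 node=${2:-}
    local src=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] && src=/sys/devices/system/node/node$node/meminfo
    # Per-node meminfo lines carry a "Node N " prefix; strip it so both sources parse identically.
    sed -E 's/^Node [0-9]+ //' "$src" | awk -v f="$field:" '$1 == f {print $2; exit}'
}
meminfo_value HugePages_Surp       # 0 for the snapshot above
meminfo_value HugePages_Total 1    # same field, but read from node 1 only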
00:03:46.346 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.346 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.346 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 108374216 kB' 'MemAvailable: 111717296 kB' 'Buffers: 4132 kB' 'Cached: 10254584 kB' 'SwapCached: 0 kB' 'Active: 7365956 kB' 'Inactive: 3525960 kB' 'Active(anon): 6875360 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636524 kB' 'Mapped: 169860 kB' 'Shmem: 
6242160 kB' 'KReclaimable: 301708 kB' 'Slab: 1141784 kB' 'SReclaimable: 301708 kB' 'SUnreclaim: 840076 kB' 'KernelStack: 27552 kB' 'PageTables: 9108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985172 kB' 'Committed_AS: 8464884 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235708 kB' 'VmallocChunk: 0 kB' 'Percpu: 119232 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4013428 kB' 'DirectMap2M: 43900928 kB' 'DirectMap1G: 88080384 kB' 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.347 
10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.347 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.348 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.348 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.348 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.348 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.348 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.348 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.348 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.348 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.348 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.348 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.348 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.348 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.348 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.348 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.348 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.348 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.348 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:46.348 [get_meminfo field scan, setup/common.sh@31-32: SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages and ShmemHugePages each compared against HugePages_Rsvd, no match, continue]
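Note: the scan summarized above (and resuming below) is the field lookup inside get_meminfo from setup/common.sh, as named in the trace. The following is a rough sketch reconstructed from this xtrace alone; the real helper may differ in detail, and the "Node N " prefix strip is simplified here:

    #!/usr/bin/env bash
    # Sketch of the lookup the trace shows; not the verbatim SPDK helper.
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem
        # per-node files live under /sys and prefix every line with "Node N "
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node [0-9]* }")          # drop the "Node N " prefix if present
        while IFS=': ' read -r var val _; do    # each [[ field == ... ]] / continue pair
            if [[ $var == "$get" ]]; then       # in the trace is one pass through this loop
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    get_meminfo HugePages_Rsvd    # returns 0 on this machine, per the trace that follows

The echo / return 0 pair in the trace marks the moment the requested field is reached.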
00:03:46.348 [get_meminfo field scan continues: ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total and HugePages_Free each compared against HugePages_Rsvd, no match, continue]
00:03:46.349 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 --
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.349 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.349 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:46.349 10:29:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:46.349 10:29:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:46.349 nr_hugepages=1536 00:03:46.349 10:29:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:46.349 resv_hugepages=0 00:03:46.349 10:29:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:46.349 surplus_hugepages=0 00:03:46.349 10:29:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:46.349 anon_hugepages=0 00:03:46.349 10:29:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:46.349 10:29:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:46.349 10:29:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:46.349 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:46.349 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:46.349 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:46.349 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.349 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.349 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.349 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.349 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.349 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.349 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.349 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.349 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 108373028 kB' 'MemAvailable: 111716108 kB' 'Buffers: 4132 kB' 'Cached: 10254608 kB' 'SwapCached: 0 kB' 'Active: 7366668 kB' 'Inactive: 3525960 kB' 'Active(anon): 6876072 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637284 kB' 'Mapped: 169868 kB' 'Shmem: 6242184 kB' 'KReclaimable: 301708 kB' 'Slab: 1141784 kB' 'SReclaimable: 301708 kB' 'SUnreclaim: 840076 kB' 'KernelStack: 27520 kB' 'PageTables: 8804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985172 kB' 'Committed_AS: 8466616 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235708 kB' 'VmallocChunk: 0 kB' 'Percpu: 119232 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4013428 kB' 'DirectMap2M: 43900928 kB' 'DirectMap1G: 88080384 kB'
00:03:46.349 [get_meminfo field scan, setup/common.sh@31-32: the same /proc/meminfo dump is now walked for HugePages_Total, stepping from MemTotal through ShmemPmdMapped with a continue on every mismatch]
00:03:46.350 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- #
IFS=': ' 00:03:46.350 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.350 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.350 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.350 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.350 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.350 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.350 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.350 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.350 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.350 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.350 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.350 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.350 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.350 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.350 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.350 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.350 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.350 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.350 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.350 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53543232 kB' 'MemUsed: 12115776 kB' 'SwapCached: 0 kB' 'Active: 5531060 kB' 'Inactive: 3325532 kB' 'Active(anon): 5195604 kB' 'Inactive(anon): 0 kB' 'Active(file): 335456 kB' 'Inactive(file): 3325532 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8691312 kB' 'Mapped: 122736 kB' 'AnonPages: 168464 kB' 'Shmem: 5030324 kB' 'KernelStack: 13912 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 176924 kB' 'Slab: 637264 kB' 'SReclaimable: 176924 kB' 'SUnreclaim: 460340 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.351 10:29:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _
00:03:46.351 [get_meminfo field scan, node 0 lookup for HugePages_Surp: SwapCached through AnonHugePages each compared, no match, continue]
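For reference, the per-node value this lookup is about to return can be spot-checked by hand against the same sysfs file the harness just read. This is only an equivalent manual check under the layout shown in the trace, not something the test itself runs:

    # prints node 0's surplus huge pages straight from the file used above
    awk '$3 == "HugePages_Surp:" {print $4}' /sys/devices/system/node/node0/meminfo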
00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 
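At this point the numbers in the log tie together: the per-node dumps report HugePages_Total of 512 on node 0 and 1024 on node 1, the system-wide lookup returned 1536, and both resv_hugepages and surplus_hugepages came back 0, so the custom allocation splits exactly across the two nodes; at the 2048 kB page size shown above that is the 3145728 kB listed as Hugetlb in /proc/meminfo. A quick arithmetic check with the values taken from this log:

    # figures observed in this trace
    node0=512 node1=1024 total=1536 resv=0 surp=0 pagesize_kb=2048
    (( node0 + node1 + resv + surp == total )) && echo 'per-node split adds up'
    (( total * pagesize_kb == 3145728 ))       && echo 'matches Hugetlb: 3145728 kB'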
00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679856 kB' 'MemFree: 54835880 kB' 'MemUsed: 5843976 kB' 'SwapCached: 0 kB' 'Active: 1834664 kB' 'Inactive: 200428 kB' 'Active(anon): 1679524 kB' 'Inactive(anon): 0 kB' 'Active(file): 155140 kB' 'Inactive(file): 200428 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1567432 kB' 'Mapped: 47132 kB' 'AnonPages: 467876 kB' 'Shmem: 1211864 kB' 'KernelStack: 13512 kB' 'PageTables: 4644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124784 kB' 'Slab: 504520 kB' 'SReclaimable: 124784 kB' 'SUnreclaim: 379736 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.352 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.352 10:29:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:03:46.352 [get_meminfo field scan, node 1 lookup for HugePages_Surp: Active through ShmemHugePages each compared, no match, continue]
00:03:46.353 10:29:10
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.353 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.353 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.353 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.353 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.353 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.353 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.353 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.353 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.353 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.353 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.353 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.353 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.353 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.353 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.353 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.353 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.353 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.353 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.353 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.353 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.353 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.353 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.353 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.353 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.353 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.354 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.354 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.354 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.354 10:29:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:46.354 10:29:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:46.354 10:29:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:46.354 10:29:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:46.354 10:29:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:46.354 10:29:10 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:46.354 node0=512 expecting 512
00:03:46.354 10:29:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:46.354 10:29:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:46.354 10:29:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:46.354 10:29:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:46.354 node1=1024 expecting 1024
00:03:46.354 10:29:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:46.354
00:03:46.354 real 0m3.647s
00:03:46.354 user 0m1.413s
00:03:46.354 sys 0m2.303s
00:03:46.354 10:29:10 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:03:46.354 10:29:10 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:46.354 ************************************
00:03:46.354 END TEST custom_alloc
00:03:46.354 ************************************
00:03:46.354 10:29:10 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:46.354 10:29:10 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:03:46.354 10:29:10 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:03:46.354 10:29:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:46.354 ************************************
00:03:46.354 START TEST no_shrink_alloc
00:03:46.354 ************************************
00:03:46.354 10:29:10 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # no_shrink_alloc
00:03:46.354 10:29:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:46.354 10:29:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:46.354 10:29:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:46.354 10:29:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:03:46.354 10:29:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:46.354 10:29:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:46.354 10:29:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:46.354 10:29:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:46.354 10:29:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:46.354 10:29:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:46.354 10:29:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:46.354 10:29:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:46.354 10:29:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:46.354 10:29:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:46.354 10:29:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:46.354 10:29:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:46.354 10:29:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:46.354 10:29:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:46.354 10:29:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:46.354 10:29:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:46.354 10:29:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.354 10:29:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:49.664 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:49.664 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:49.664 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:49.664 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:49.664 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:49.664 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:49.664 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:49.664 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:49.664 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:49.664 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:49.664 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:49.664 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:49.664 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:49.664 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:49.664 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:49.664 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:49.664 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:49.664 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:49.664 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:49.664 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:49.664 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:49.664 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:49.664 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:49.664 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:49.664 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:49.664 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:49.664 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:49.664 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:49.664 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:49.664 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.664 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.664 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.664 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' 
]] 00:03:49.664 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.664 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.664 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.664 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.664 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109450128 kB' 'MemAvailable: 112793192 kB' 'Buffers: 4132 kB' 'Cached: 10254736 kB' 'SwapCached: 0 kB' 'Active: 7368140 kB' 'Inactive: 3525960 kB' 'Active(anon): 6877544 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638236 kB' 'Mapped: 170980 kB' 'Shmem: 6242312 kB' 'KReclaimable: 301676 kB' 'Slab: 1142108 kB' 'SReclaimable: 301676 kB' 'SUnreclaim: 840432 kB' 'KernelStack: 27440 kB' 'PageTables: 9112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8469944 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235868 kB' 'VmallocChunk: 0 kB' 'Percpu: 119232 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4013428 kB' 'DirectMap2M: 43900928 kB' 'DirectMap1G: 88080384 kB' 00:03:49.664 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.664 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.664 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.664 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.664 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.664 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.664 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.664 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.664 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.664 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.664 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.664 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.664 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.664 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.664 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.664 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.665 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109453336 kB' 'MemAvailable: 112796400 kB' 'Buffers: 4132 kB' 'Cached: 10254736 kB' 'SwapCached: 0 kB' 'Active: 7367828 kB' 'Inactive: 3525960 kB' 'Active(anon): 6877232 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637948 kB' 'Mapped: 169972 kB' 'Shmem: 6242312 kB' 'KReclaimable: 301676 kB' 'Slab: 1142092 kB' 'SReclaimable: 301676 kB' 'SUnreclaim: 840416 kB' 'KernelStack: 27488 kB' 'PageTables: 8656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8467404 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235756 kB' 'VmallocChunk: 0 kB' 'Percpu: 119232 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 
0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4013428 kB' 'DirectMap2M: 43900928 kB' 'DirectMap1G: 88080384 kB' 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.666 10:29:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.666 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.667 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
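(At this point in the trace, verify_nr_hugepages is reading counters back out of the meminfo snapshot it captured above: AnonHugePages was already recorded as anon=0, the scan here is after HugePages_Surp, and HugePages_Rsvd follows. The point of collecting these is to confirm that the 1024 huge pages requested for node 0 at the start of no_shrink_alloc are still allocated. A simplified sketch of that kind of consistency check, assuming a plain "total minus surplus equals expected" rule rather than the exact arithmetic in setup/hugepages.sh:)

  # Simplified stand-in for the pool check; expected=1024 comes from the
  # nr_hugepages=1024 set by get_test_nr_hugepages earlier in the trace.
  expected=1024
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
  rsvd=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
  if (( total - surp == expected )); then
      echo "hugepage pool intact: ${total} total, ${surp} surplus, ${rsvd} reserved"
  else
      echo "unexpected pool size: $(( total - surp )) != ${expected}" >&2
      exit 1
  fi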
00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109454212 kB' 'MemAvailable: 112797276 kB' 'Buffers: 4132 kB' 'Cached: 10254760 kB' 'SwapCached: 0 kB' 'Active: 7367180 kB' 'Inactive: 3525960 kB' 'Active(anon): 6876584 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637640 kB' 'Mapped: 169880 kB' 'Shmem: 6242336 kB' 'KReclaimable: 301676 kB' 'Slab: 1141912 kB' 'SReclaimable: 301676 kB' 'SUnreclaim: 840236 kB' 'KernelStack: 27408 kB' 'PageTables: 8796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8467428 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235788 kB' 'VmallocChunk: 0 kB' 'Percpu: 119232 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4013428 kB' 'DirectMap2M: 43900928 kB' 'DirectMap1G: 88080384 kB' 00:03:49.668 10:29:13 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.668-00:03:49.670 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- [… xtrace elided: get_meminfo walks the remaining /proc/meminfo fields from the dump above (MemFree, MemAvailable, Buffers, … HugePages_Total, HugePages_Free) and takes the '# continue' branch for each, since none of them is HugePages_Rsvd …]
00:03:49.670 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:49.670 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:49.670 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 --
# return 0 00:03:49.670 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:49.670 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:49.670 nr_hugepages=1024 00:03:49.670 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:49.670 resv_hugepages=0 00:03:49.670 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:49.670 surplus_hugepages=0 00:03:49.670 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:49.670 anon_hugepages=0 00:03:49.670 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:49.670 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:49.670 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:49.670 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:49.670 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:49.670 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:49.670 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.670 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.670 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.670 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.670 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.670 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.670 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.670 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.671 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109455008 kB' 'MemAvailable: 112798072 kB' 'Buffers: 4132 kB' 'Cached: 10254780 kB' 'SwapCached: 0 kB' 'Active: 7367216 kB' 'Inactive: 3525960 kB' 'Active(anon): 6876620 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637648 kB' 'Mapped: 169880 kB' 'Shmem: 6242356 kB' 'KReclaimable: 301676 kB' 'Slab: 1141912 kB' 'SReclaimable: 301676 kB' 'SUnreclaim: 840236 kB' 'KernelStack: 27456 kB' 'PageTables: 8720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8465728 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235820 kB' 'VmallocChunk: 0 kB' 'Percpu: 119232 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4013428 kB' 'DirectMap2M: 43900928 kB' 'DirectMap1G: 88080384 kB' 
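The wall of xtrace around this point is setup/common.sh's get_meminfo helper: with IFS=': ' it reads /proc/meminfo (or a node's meminfo) one 'key: value' pair at a time, takes 'continue' for every key that is not the one requested, then echoes the bare value and returns. A minimal standalone sketch of the same pattern follows; it is not the SPDK helper itself, and the name get_meminfo_sketch and its call sites are illustrative only.

#!/usr/bin/env bash
# Sketch of the per-key /proc/meminfo scan seen in the trace above.
get_meminfo_sketch() {
    local get=$1 node=${2:-}            # field name, optional NUMA node number
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val unit
    while IFS= read -r line; do
        line=${line#"Node $node "}       # per-node files prefix every line with "Node <n> "
        IFS=': ' read -r var val unit <<< "$line"
        [[ $var == "$get" ]] || continue # the long runs of '# continue' in the trace are this skip
        echo "$val"                      # bare value, e.g. 1024 (any 'kB' suffix lands in $unit)
        return 0
    done < "$mem_f"
    return 1
}

get_meminfo_sketch HugePages_Total      # system-wide pool: 1024 in this run
get_meminfo_sketch HugePages_Surp 0     # surplus huge pages on node0: 0 in this run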
00:03:49.671-00:03:49.672 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- [… xtrace elided: get_meminfo scans the /proc/meminfo fields from the dump above a second time (MemTotal, MemFree, … CmaFree, Unaccepted) and takes the '# continue' branch for each, since none of them is HugePages_Total …]
00:03:49.672 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.672 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:49.672 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:49.672 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:49.672 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:49.672 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:49.672 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:49.672 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:49.672 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 --
# for node in /sys/devices/system/node/node+([0-9]) 00:03:49.672 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:49.672 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:49.672 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:49.672 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:49.672 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:49.672 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:49.672 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.672 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:49.672 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:49.672 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.672 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.672 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:49.672 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:49.672 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.672 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.672 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.672 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.673 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 52509612 kB' 'MemUsed: 13149396 kB' 'SwapCached: 0 kB' 'Active: 5531968 kB' 'Inactive: 3325532 kB' 'Active(anon): 5196512 kB' 'Inactive(anon): 0 kB' 'Active(file): 335456 kB' 'Inactive(file): 3325532 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8691428 kB' 'Mapped: 122728 kB' 'AnonPages: 169292 kB' 'Shmem: 5030440 kB' 'KernelStack: 13960 kB' 'PageTables: 4524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 176892 kB' 'Slab: 637508 kB' 'SReclaimable: 176892 kB' 'SUnreclaim: 460616 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:49.673 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.673 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.673 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.673 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.673 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.673 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.673 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
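The get_nodes walk that starts here enumerates /sys/devices/system/node/node<N>, records each node's current huge page count (1024 on node0, 0 on node1 in this run), and then re-queries node0's counters before printing 'node0=1024 expecting 1024'. The sketch below reproduces that per-node tally outside the harness; it reads the kernel's standard per-node sysfs hugepage counters instead of re-parsing each node's meminfo the way the script does, and every variable name in it is illustrative.

#!/usr/bin/env bash
# Tally 2 MiB huge pages per NUMA node and compare node0 against the expected count.
shopt -s nullglob
declare -A node_total node_free
expected_node0=1024                     # what this test run configured on node0

for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    hp=$node_dir/hugepages/hugepages-2048kB
    node_total[$node]=$(<"$hp/nr_hugepages")
    node_free[$node]=$(<"$hp/free_hugepages")
done

echo "found ${#node_total[@]} NUMA node(s)"
for node in "${!node_total[@]}"; do
    echo "node$node: total=${node_total[$node]} free=${node_free[$node]}"
done

# Same shape as the 'node0=1024 expecting 1024' line the harness prints further down.
if [[ ${node_total[0]:-0} -eq $expected_node0 ]]; then
    echo "node0=${node_total[0]:-0} expecting $expected_node0"
fi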
00:03:49.673-00:03:49.675 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- [… xtrace elided: get_meminfo repeats the per-key scan over the node0 dump above, read from /sys/devices/system/node/node0/meminfo, taking the '# continue' branch for every field (MemTotal, MemFree, MemUsed, … HugePages_Total) that is not HugePages_Surp …]
00:03:49.675 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.675 10:29:13
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.675 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.675 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.675 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.675 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.675 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:49.675 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:49.675 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:49.675 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:49.675 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:49.675 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:49.675 node0=1024 expecting 1024 00:03:49.675 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:49.675 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:49.675 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:49.675 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:49.675 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.675 10:29:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:52.981 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:52.981 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:52.981 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:52.981 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:52.981 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:52.981 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:52.981 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:52.981 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:52.981 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:52.981 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:52.981 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:52.981 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:52.981 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:52.981 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:52.981 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:52.981 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:52.981 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:52.982 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:52.982 10:29:17 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109448544 kB' 'MemAvailable: 112791608 kB' 'Buffers: 4132 kB' 'Cached: 10254896 kB' 'SwapCached: 0 kB' 'Active: 7368272 kB' 'Inactive: 3525960 kB' 'Active(anon): 6877676 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638452 kB' 'Mapped: 169900 kB' 'Shmem: 6242472 kB' 'KReclaimable: 301676 kB' 'Slab: 1142108 kB' 'SReclaimable: 301676 kB' 'SUnreclaim: 840432 kB' 'KernelStack: 27424 kB' 'PageTables: 8372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8488812 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235772 kB' 'VmallocChunk: 0 kB' 'Percpu: 119232 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4013428 kB' 'DirectMap2M: 43900928 kB' 'DirectMap1G: 88080384 kB' 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.982 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.983 10:29:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109447420 kB' 'MemAvailable: 112790484 kB' 'Buffers: 4132 kB' 'Cached: 10254900 kB' 'SwapCached: 0 kB' 'Active: 7369120 kB' 'Inactive: 3525960 kB' 'Active(anon): 6878524 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 639208 kB' 'Mapped: 170392 kB' 'Shmem: 6242476 kB' 'KReclaimable: 301676 kB' 'Slab: 1142128 kB' 'SReclaimable: 301676 kB' 'SUnreclaim: 840452 kB' 'KernelStack: 27472 kB' 'PageTables: 8784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8468244 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235676 kB' 'VmallocChunk: 0 kB' 'Percpu: 119232 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4013428 kB' 'DirectMap2M: 43900928 kB' 'DirectMap1G: 88080384 kB' 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.983 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.984 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.984 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:52.984 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.984 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.984 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.984 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.984 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.984 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.984 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.984 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:52.984 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.984 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.984 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.984 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.252 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.252 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.252 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.252 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.252 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.252 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.252 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.252 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.252 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.252 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.252 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.252 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.252 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.252 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.252 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.252 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.252 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.252 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.252 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.252 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.252 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.252 10:29:17 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:53.252 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.252 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.252 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.252 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.252 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.252 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.252 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.252 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.252 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.252 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.252 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.252 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.253 10:29:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.253 10:29:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.253 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.254 10:29:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
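The repeated IFS=': ' / read -r var val _ / continue entries above, and the printf dump of /proc/meminfo that follows, are the test's get_meminfo helper in setup/common.sh looking up a single field (AnonHugePages, HugePages_Surp, HugePages_Rsvd, ...) and echoing its value back to hugepages.sh, where it becomes anon=0, surp=0, and so on. The sketch below is a simplified, stand-alone reconstruction of that pattern based only on what the trace shows; get_meminfo_field is a made-up name, and it omits the per-node /sys/devices/system/node/node<N>/meminfo handling that the real helper also performs.

    # Sketch (bash), assuming plain /proc/meminfo: return the value of one field,
    # the way the IFS=': ' / read -r var val _ loop in the trace does.
    # get_meminfo_field is a hypothetical name, not the project's helper.
    get_meminfo_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # e.g. "HugePages_Surp:  0" -> var=HugePages_Surp, val=0;
            # for "MemTotal: 126338864 kB" the trailing unit lands in _.
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done </proc/meminfo
        return 1
    }

    # Usage in the spirit of the trace: the helper's output feeds the
    # surp=0 / resv=0 assignments seen at hugepages.sh@99 and @100.
    surp=$(get_meminfo_field HugePages_Surp)
    resv=$(get_meminfo_field HugePages_Rsvd)
    # Illustration only: the test compares per-node counts ("node0=1024
    # expecting 1024"); checking the global total works here because this
    # run has all 1024 hugepages on node0.
    [[ $(get_meminfo_field HugePages_Total) == 1024 ]] && echo 'node0=1024 expecting 1024'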
00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109441184 kB' 'MemAvailable: 112784248 kB' 'Buffers: 4132 kB' 'Cached: 10254916 kB' 'SwapCached: 0 kB' 'Active: 7373272 kB' 'Inactive: 3525960 kB' 'Active(anon): 6882676 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644324 kB' 'Mapped: 170400 kB' 'Shmem: 6242492 kB' 'KReclaimable: 301676 kB' 'Slab: 1142196 kB' 'SReclaimable: 301676 kB' 'SUnreclaim: 840520 kB' 'KernelStack: 27472 kB' 'PageTables: 8620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8474364 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235744 kB' 'VmallocChunk: 0 kB' 'Percpu: 119232 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4013428 kB' 'DirectMap2M: 43900928 kB' 'DirectMap1G: 88080384 kB' 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.254 10:29:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.254 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.255 10:29:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.255 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:53.256 nr_hugepages=1024 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:53.256 resv_hugepages=0 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:53.256 surplus_hugepages=0 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:53.256 anon_hugepages=0 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338864 kB' 'MemFree: 109445168 kB' 'MemAvailable: 112788232 kB' 'Buffers: 4132 kB' 'Cached: 10254940 kB' 'SwapCached: 0 kB' 'Active: 7368336 kB' 'Inactive: 3525960 kB' 'Active(anon): 6877740 kB' 'Inactive(anon): 0 kB' 'Active(file): 490596 kB' 'Inactive(file): 3525960 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638500 kB' 'Mapped: 170304 kB' 'Shmem: 6242516 kB' 'KReclaimable: 301676 kB' 'Slab: 1142196 kB' 'SReclaimable: 301676 kB' 'SUnreclaim: 840520 kB' 'KernelStack: 27408 kB' 'PageTables: 8764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509460 kB' 'Committed_AS: 8468268 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235756 kB' 'VmallocChunk: 0 kB' 'Percpu: 119232 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4013428 kB' 'DirectMap2M: 43900928 kB' 'DirectMap1G: 88080384 kB' 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.256 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.257 10:29:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.257 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.258 10:29:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
65659008 kB' 'MemFree: 52511952 kB' 'MemUsed: 13147056 kB' 'SwapCached: 0 kB' 'Active: 5530820 kB' 'Inactive: 3325532 kB' 'Active(anon): 5195364 kB' 'Inactive(anon): 0 kB' 'Active(file): 335456 kB' 'Inactive(file): 3325532 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8691512 kB' 'Mapped: 122720 kB' 'AnonPages: 167968 kB' 'Shmem: 5030524 kB' 'KernelStack: 14136 kB' 'PageTables: 4816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 176892 kB' 'Slab: 637556 kB' 'SReclaimable: 176892 kB' 'SUnreclaim: 460664 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.258 
10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.258 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.259 10:29:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:53.259 node0=1024 expecting 1024 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:53.259 00:03:53.259 real 0m7.253s 00:03:53.259 user 0m2.853s 00:03:53.259 sys 0m4.520s 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:53.259 10:29:17 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:53.259 ************************************ 00:03:53.259 END TEST no_shrink_alloc 00:03:53.259 ************************************ 00:03:53.259 10:29:17 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:53.259 10:29:17 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:53.259 10:29:17 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 
00:03:53.259 10:29:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:53.259 10:29:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:53.259 10:29:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:53.259 10:29:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:53.260 10:29:17 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:53.260 10:29:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:53.260 10:29:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:53.260 10:29:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:53.260 10:29:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:53.260 10:29:17 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:53.260 10:29:17 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:53.260 00:03:53.260 real 0m26.071s 00:03:53.260 user 0m10.250s 00:03:53.260 sys 0m16.187s 00:03:53.260 10:29:17 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:53.260 10:29:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:53.260 ************************************ 00:03:53.260 END TEST hugepages 00:03:53.260 ************************************ 00:03:53.260 10:29:17 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:53.260 10:29:17 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:53.260 10:29:17 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:53.260 10:29:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:53.260 ************************************ 00:03:53.260 START TEST driver 00:03:53.260 ************************************ 00:03:53.260 10:29:17 setup.sh.driver -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:53.584 * Looking for test storage... 
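[editor's note] The guess_driver trace that follows reduces to one decision: if the kernel exposes IOMMU groups (or unsafe no-IOMMU mode) and a vfio_pci module can be resolved, prefer vfio-pci. A hedged sketch of that decision, not the literal setup/driver.sh code; pick_driver and the uio_pci_generic fallback are assumptions for illustration:

    #!/usr/bin/env bash
    # Roughly the driver choice the guess_driver test exercises.
    shopt -s nullglob   # so an empty iommu_groups directory really counts as zero groups
    pick_driver() {
        local unsafe_vfio=N
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        local iommu_groups=(/sys/kernel/iommu_groups/*)
        # vfio-pci is usable when IOMMU groups exist (322 of them on this box, per the
        # trace) or unsafe no-IOMMU mode is enabled, and vfio_pci resolves via modprobe.
        if { (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == [Yy] ]]; } &&
            modprobe --show-depends vfio_pci &> /dev/null; then
            echo vfio-pci
        else
            echo uio_pci_generic   # assumed fallback; the real script handles more cases
        fi
    }
    pick_driver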
00:03:53.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:53.584 10:29:17 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:53.584 10:29:17 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:53.584 10:29:17 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:58.899 10:29:22 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:58.899 10:29:22 setup.sh.driver -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:58.899 10:29:22 setup.sh.driver -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:58.899 10:29:22 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:58.899 ************************************ 00:03:58.899 START TEST guess_driver 00:03:58.899 ************************************ 00:03:58.899 10:29:22 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # guess_driver 00:03:58.899 10:29:22 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:58.899 10:29:22 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:58.899 10:29:22 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:58.899 10:29:22 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:58.899 10:29:22 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:58.899 10:29:22 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:58.899 10:29:22 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:58.899 10:29:22 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:58.899 10:29:22 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:58.899 10:29:22 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 322 > 0 )) 00:03:58.899 10:29:22 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:58.899 10:29:22 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:58.899 10:29:22 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:58.899 10:29:22 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:58.899 10:29:22 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:58.899 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:58.899 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:58.899 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:58.899 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:58.899 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:58.899 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:58.899 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:58.899 10:29:22 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:58.899 10:29:22 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:58.899 10:29:22 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:58.899 10:29:22 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:58.899 10:29:22 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:58.900 Looking for driver=vfio-pci 00:03:58.900 10:29:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:58.900 10:29:22 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:58.900 10:29:22 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.900 10:29:22 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:01.448 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:01.448 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:01.448 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:01.448 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:01.448 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:01.448 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:01.448 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:01.448 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:01.448 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:01.448 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:01.448 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:01.448 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:01.448 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:01.448 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:01.448 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:01.448 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:01.448 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:01.448 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:01.448 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:01.448 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:01.448 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:01.710 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:01.710 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:01.710 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:01.710 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:01.710 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:01.710 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:01.710 10:29:25 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:01.710 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:01.710 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:01.710 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:01.710 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:01.710 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:01.710 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:01.710 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:01.710 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:01.710 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:01.710 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:01.710 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:01.710 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:01.710 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:01.710 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:01.710 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:01.710 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:01.710 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:01.710 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:01.710 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:01.710 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:01.710 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:01.710 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:01.710 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:01.710 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:01.710 10:29:25 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:01.710 10:29:25 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:01.710 10:29:25 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:06.996 00:04:06.996 real 0m7.934s 00:04:06.996 user 0m2.397s 00:04:06.996 sys 0m4.714s 00:04:06.996 10:29:30 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:06.996 10:29:30 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:06.996 ************************************ 00:04:06.996 END TEST guess_driver 00:04:06.996 ************************************ 00:04:06.996 00:04:06.996 real 0m12.768s 00:04:06.996 user 0m3.918s 00:04:06.996 sys 0m7.295s 00:04:06.996 10:29:30 setup.sh.driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:06.996 
10:29:30 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:06.996 ************************************ 00:04:06.996 END TEST driver 00:04:06.996 ************************************ 00:04:06.996 10:29:30 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:06.996 10:29:30 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:06.996 10:29:30 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:06.996 10:29:30 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:06.996 ************************************ 00:04:06.996 START TEST devices 00:04:06.996 ************************************ 00:04:06.996 10:29:30 setup.sh.devices -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:06.996 * Looking for test storage... 00:04:06.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:06.996 10:29:30 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:06.996 10:29:30 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:06.996 10:29:30 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:06.996 10:29:30 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:10.298 10:29:34 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:10.298 10:29:34 setup.sh.devices -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:04:10.298 10:29:34 setup.sh.devices -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:04:10.298 10:29:34 setup.sh.devices -- common/autotest_common.sh@1669 -- # local nvme bdf 00:04:10.298 10:29:34 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:10.298 10:29:34 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:04:10.298 10:29:34 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:04:10.298 10:29:34 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:10.298 10:29:34 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:10.298 10:29:34 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:10.298 10:29:34 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:10.298 10:29:34 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:10.298 10:29:34 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:10.298 10:29:34 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:10.299 10:29:34 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:10.299 10:29:34 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:10.299 10:29:34 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:10.299 10:29:34 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:10.299 10:29:34 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:10.299 10:29:34 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:10.299 10:29:34 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:10.299 10:29:34 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:10.299 No valid GPT data, 
bailing 00:04:10.299 10:29:34 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:10.299 10:29:34 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:10.299 10:29:34 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:10.299 10:29:34 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:10.299 10:29:34 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:10.299 10:29:34 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:10.299 10:29:34 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:04:10.299 10:29:34 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:10.299 10:29:34 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:10.299 10:29:34 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:10.299 10:29:34 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:10.299 10:29:34 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:10.299 10:29:34 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:10.299 10:29:34 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:10.299 10:29:34 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:10.299 10:29:34 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:10.299 ************************************ 00:04:10.299 START TEST nvme_mount 00:04:10.299 ************************************ 00:04:10.299 10:29:34 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # nvme_mount 00:04:10.299 10:29:34 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:10.299 10:29:34 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:10.299 10:29:34 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:10.299 10:29:34 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:10.299 10:29:34 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:10.299 10:29:34 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:10.299 10:29:34 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:10.299 10:29:34 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:10.299 10:29:34 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:10.299 10:29:34 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:10.299 10:29:34 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:10.299 10:29:34 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:10.299 10:29:34 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:10.299 10:29:34 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:10.299 10:29:34 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:10.299 10:29:34 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:10.299 10:29:34 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:10.299 10:29:34 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:10.299 10:29:34 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:11.239 Creating new GPT entries in memory. 00:04:11.239 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:11.239 other utilities. 00:04:11.239 10:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:11.239 10:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:11.239 10:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:11.239 10:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:11.239 10:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:12.181 Creating new GPT entries in memory. 00:04:12.181 The operation has completed successfully. 00:04:12.181 10:29:36 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:12.181 10:29:36 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:12.181 10:29:36 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 595125 00:04:12.181 10:29:36 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:12.181 10:29:36 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:12.181 10:29:36 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:12.181 10:29:36 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:12.181 10:29:36 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:12.181 10:29:36 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:12.181 10:29:36 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:12.181 10:29:36 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:12.181 10:29:36 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:12.181 10:29:36 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:12.181 10:29:36 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:12.181 10:29:36 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:12.181 10:29:36 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:12.181 10:29:36 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:12.181 10:29:36 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
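[editor's note] The nvme_mount steps traced above boil down to: wipe the GPT, carve out a 1 GiB partition, put ext4 on it, mount it, and drop a dummy file for the verify step. A condensed sketch, assuming /dev/nvme0n1 is the disk the test claimed and using a hypothetical $MOUNT_DIR in place of the long workspace path:

    #!/usr/bin/env bash
    set -e
    disk=/dev/nvme0n1
    MOUNT_DIR=/tmp/nvme_mount   # stand-in for .../spdk/test/setup/nvme_mount

    sgdisk "$disk" --zap-all              # destroy any existing GPT/MBR
    sgdisk "$disk" --new=1:2048:2099199   # 1 GiB partition starting at sector 2048
    partprobe "$disk"                     # re-read the table (the test waits on udev instead)

    mkdir -p "$MOUNT_DIR"
    mkfs.ext4 -qF "${disk}p1"             # quiet, force: the partition is brand new
    mount "${disk}p1" "$MOUNT_DIR"
    touch "$MOUNT_DIR/test_nvme"          # dummy file the verify step looks for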
00:04:12.181 10:29:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.181 10:29:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:12.181 10:29:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:12.181 10:29:36 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.181 10:29:36 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:15.481 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:15.481 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.481 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:15.481 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.481 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:15.481 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.481 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:15.481 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.481 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:15.481 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.481 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:15.481 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.481 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:15.481 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.481 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:15.481 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.481 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:15.481 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:15.481 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:15.481 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.481 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:15.481 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.481 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:15.481 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.481 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:15.481 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:15.481 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:15.481 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.481 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:15.481 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.481 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:15.481 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.481 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:15.481 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.481 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:15.481 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.742 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:15.742 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:15.742 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.742 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:15.742 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:15.742 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:15.742 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.742 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.742 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:15.742 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:15.742 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:15.742 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:15.742 10:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:16.004 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:16.004 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:16.004 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:16.004 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:16.004 10:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:16.004 10:29:40 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:16.004 10:29:40 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:16.004 10:29:40 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:16.004 10:29:40 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:16.004 10:29:40 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:16.004 10:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:16.004 10:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:16.004 10:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:16.004 10:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:16.004 10:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:16.004 10:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:16.004 10:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:16.004 10:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:16.004 10:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:16.004 10:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.004 10:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:16.004 10:29:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:16.004 10:29:40 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.004 10:29:40 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:19.310 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.310 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.310 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.311 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.311 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.311 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.311 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.311 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.311 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.311 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.311 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.311 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.311 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.311 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.311 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.311 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.311 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.311 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:19.311 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:19.311 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.311 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.311 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.311 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.311 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.311 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.311 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.311 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.311 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.311 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.311 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.311 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.311 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.311 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.311 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.311 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:19.311 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.572 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:19.573 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:19.573 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:19.573 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:19.573 10:29:43 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:19.573 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:19.573 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:19.573 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:19.573 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:19.573 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:19.573 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:19.573 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:19.573 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:19.573 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:19.573 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.573 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:19.573 10:29:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:19.573 10:29:43 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.573 10:29:43 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:22.877 10:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.877 10:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.877 10:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.877 10:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.877 10:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.877 10:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.877 10:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.877 10:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.877 10:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.877 10:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.877 10:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.877 10:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.877 10:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.877 10:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.877 10:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.877 10:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.877 10:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 
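[editor's note] The "Active devices: ..., so not binding PCI dev" lines above (and again just below) are the point of the verify step: setup.sh refuses to unbind an NVMe controller whose namespaces are mounted or otherwise held, and the test greps for exactly that refusal. A rough sketch of such an in-use check, assuming lsblk is available; block_busy is a hypothetical helper, and the real scripts/setup.sh logic is more involved:

    #!/usr/bin/env bash
    # Return 0 (busy) if a block device, or any partition of it, is mounted or has holders.
    block_busy() {
        local dev=$1 name=${1#/dev/}
        # Any mountpoint anywhere in the device tree?
        lsblk -nro MOUNTPOINT "$dev" | grep -q . && return 0
        # Any holders (device-mapper, md, ...) hanging off the device or its partitions?
        local h
        for h in /sys/class/block/"$name"*/holders/*; do
            [[ -e $h ]] && return 0
        done
        return 1
    }

    if block_busy /dev/nvme0n1; then
        echo "Active devices on /dev/nvme0n1, so not binding PCI dev 0000:65:00.0"
    fi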
00:04:22.877 10:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:22.877 10:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:22.877 10:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.877 10:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.877 10:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.877 10:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.877 10:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.877 10:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.877 10:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.877 10:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.877 10:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.877 10:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.877 10:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.877 10:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.877 10:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.877 10:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.877 10:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.877 10:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:22.877 10:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.877 10:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:22.877 10:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:22.877 10:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:22.877 10:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:22.877 10:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:22.877 10:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:22.877 10:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:22.877 10:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:22.877 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:22.877 00:04:22.877 real 0m12.842s 00:04:22.877 user 0m3.928s 00:04:22.877 sys 0m6.805s 00:04:22.877 10:29:47 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:22.877 10:29:47 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:22.877 ************************************ 00:04:22.877 END TEST nvme_mount 00:04:22.877 ************************************ 00:04:23.138 
10:29:47 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:23.138 10:29:47 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:23.138 10:29:47 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:23.138 10:29:47 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:23.138 ************************************ 00:04:23.138 START TEST dm_mount 00:04:23.138 ************************************ 00:04:23.138 10:29:47 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # dm_mount 00:04:23.138 10:29:47 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:23.138 10:29:47 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:23.138 10:29:47 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:23.138 10:29:47 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:23.138 10:29:47 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:23.138 10:29:47 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:23.138 10:29:47 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:23.138 10:29:47 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:23.138 10:29:47 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:23.138 10:29:47 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:23.138 10:29:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:23.138 10:29:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:23.138 10:29:47 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:23.138 10:29:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:23.138 10:29:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:23.138 10:29:47 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:23.139 10:29:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:23.139 10:29:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:23.139 10:29:47 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:23.139 10:29:47 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:23.139 10:29:47 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:24.081 Creating new GPT entries in memory. 00:04:24.081 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:24.081 other utilities. 00:04:24.081 10:29:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:24.081 10:29:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:24.081 10:29:48 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:24.081 10:29:48 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:24.081 10:29:48 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:25.023 Creating new GPT entries in memory. 00:04:25.023 The operation has completed successfully. 
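[editor's note] The dm_mount test continuing below adds a second 1 GiB partition and stitches the two partitions into a single device-mapper target named nvme_dm_test before formatting and mounting it. A hedged sketch of that concatenation with an explicit linear table; the SPDK helpers build the table themselves, so the exact invocation differs:

    #!/usr/bin/env bash
    set -e
    disk=/dev/nvme0n1

    sgdisk "$disk" --new=2:2099200:4196351   # second 1 GiB partition right after the first

    # Concatenate p1 and p2 into one 2 GiB linear dm target.
    # Table format: <logical start sector> <sector count> linear <backing dev> <offset>
    dmsetup create nvme_dm_test <<'EOF'
    0 2097152 linear /dev/nvme0n1p1 0
    2097152 2097152 linear /dev/nvme0n1p2 0
    EOF

    readlink -f /dev/mapper/nvme_dm_test     # resolves to /dev/dm-N (dm-1 in this run)
    mkfs.ext4 -qF /dev/mapper/nvme_dm_test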
00:04:25.023 10:29:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:25.023 10:29:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:25.023 10:29:49 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:25.023 10:29:49 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:25.023 10:29:49 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:26.408 The operation has completed successfully. 00:04:26.408 10:29:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:26.408 10:29:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:26.408 10:29:50 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 600218 00:04:26.408 10:29:50 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:26.408 10:29:50 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:26.408 10:29:50 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:26.408 10:29:50 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:26.408 10:29:50 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:26.408 10:29:50 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:26.408 10:29:50 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:26.408 10:29:50 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:26.408 10:29:50 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:26.408 10:29:50 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-1 00:04:26.408 10:29:50 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-1 00:04:26.408 10:29:50 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-1 ]] 00:04:26.408 10:29:50 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-1 ]] 00:04:26.408 10:29:50 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:26.408 10:29:50 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:26.408 10:29:50 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:26.408 10:29:50 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:26.408 10:29:50 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:26.408 10:29:50 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:26.408 10:29:50 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:26.408 10:29:50 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:26.408 10:29:50 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:26.408 10:29:50 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:26.408 10:29:50 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:26.408 10:29:50 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:26.408 10:29:50 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:26.408 10:29:50 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:26.408 10:29:50 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:26.408 10:29:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.408 10:29:50 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:26.408 10:29:50 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:26.408 10:29:50 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.408 10:29:50 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:29.710 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.710 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.710 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.710 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.710 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.710 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.710 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.710 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.710 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.710 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.710 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.710 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.710 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.710 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.710 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.710 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.710 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.710 10:29:53 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:29.710 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:29.710 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.710 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.710 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.710 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.710 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.711 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.711 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.711 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.711 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.711 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.711 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.711 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.711 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.711 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.711 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.711 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.711 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.711 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:29.711 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:29.711 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:29.711 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:29.711 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:29.711 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:29.711 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 '' '' 00:04:29.711 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:29.711 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 00:04:29.711 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:29.711 
10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:29.711 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:29.711 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:29.711 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:29.711 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.711 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:29.711 10:29:53 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:29.711 10:29:53 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.711 10:29:53 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:33.014 10:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.014 10:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.014 10:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.014 10:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.014 10:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.014 10:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.014 10:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.014 10:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.014 10:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.014 10:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.014 10:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.014 10:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.014 10:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.014 10:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.014 10:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.014 10:29:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.014 10:29:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.014 10:29:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\1\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\1* ]] 00:04:33.014 10:29:57 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:33.014 10:29:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.014 10:29:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.014 10:29:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.014 10:29:57 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.014 10:29:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.014 10:29:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.014 10:29:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.014 10:29:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.014 10:29:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.014 10:29:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.014 10:29:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.014 10:29:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.014 10:29:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.014 10:29:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.014 10:29:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.014 10:29:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.014 10:29:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.014 10:29:57 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:33.014 10:29:57 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:33.014 10:29:57 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:33.014 10:29:57 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:33.014 10:29:57 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:33.014 10:29:57 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:33.014 10:29:57 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:33.014 10:29:57 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:33.014 10:29:57 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:33.014 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:33.014 10:29:57 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:33.014 10:29:57 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:33.014 00:04:33.014 real 0m9.984s 00:04:33.014 user 0m2.489s 00:04:33.014 sys 0m4.570s 00:04:33.014 10:29:57 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:33.014 10:29:57 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:33.014 ************************************ 00:04:33.014 END TEST dm_mount 00:04:33.014 ************************************ 00:04:33.014 10:29:57 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:33.014 10:29:57 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:33.014 10:29:57 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.014 10:29:57 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 
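The traced shell above is the teardown path of the dm_mount test: unmount the test mount point, remove the device-mapper target, and (continuing in the cleanup calls that follow below) wipe the filesystem signatures from the partitions it was built on. A condensed, standalone sketch of that cleanup order, using only commands and device names that appear in the trace (the dm_mount mount point, the nvme_dm_test mapper, /dev/nvme0n1p1 and /dev/nvme0n1p2); error handling is reduced to simple existence checks:

MNT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
# Unmount the scratch mount point if it is still mounted.
mountpoint -q "$MNT" && umount "$MNT"
# Remove the device-mapper target created for the test.
[ -L /dev/mapper/nvme_dm_test ] && dmsetup remove --force nvme_dm_test
# Clear filesystem signatures from the partitions that backed it.
for part in /dev/nvme0n1p1 /dev/nvme0n1p2; do
    [ -b "$part" ] && wipefs --all "$part"
done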
00:04:33.014 10:29:57 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:33.014 10:29:57 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:33.014 10:29:57 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:33.275 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:33.275 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:33.275 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:33.275 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:33.275 10:29:57 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:33.275 10:29:57 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:33.275 10:29:57 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:33.275 10:29:57 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:33.275 10:29:57 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:33.275 10:29:57 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:33.275 10:29:57 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:33.275 00:04:33.275 real 0m27.194s 00:04:33.275 user 0m7.973s 00:04:33.275 sys 0m14.070s 00:04:33.275 10:29:57 setup.sh.devices -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:33.275 10:29:57 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:33.275 ************************************ 00:04:33.275 END TEST devices 00:04:33.275 ************************************ 00:04:33.536 00:04:33.536 real 1m31.290s 00:04:33.536 user 0m30.548s 00:04:33.536 sys 0m52.283s 00:04:33.536 10:29:57 setup.sh -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:33.536 10:29:57 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:33.536 ************************************ 00:04:33.536 END TEST setup.sh 00:04:33.536 ************************************ 00:04:33.536 10:29:57 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:36.838 Hugepages 00:04:36.838 node hugesize free / total 00:04:36.838 node0 1048576kB 0 / 0 00:04:36.838 node0 2048kB 2048 / 2048 00:04:36.838 node1 1048576kB 0 / 0 00:04:36.838 node1 2048kB 0 / 0 00:04:36.838 00:04:36.838 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:36.838 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:36.838 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:36.838 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:36.838 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:36.838 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:36.838 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:36.838 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:36.838 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:37.097 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:37.098 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:37.098 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:37.098 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:37.098 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:37.098 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:37.098 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:37.098 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:37.098 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:37.098 10:30:01 -- spdk/autotest.sh@130 -- # uname -s 
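The setup.sh status report above summarizes hugepage availability per NUMA node (2048 kB pages reserved on node0 only) alongside the PCI device/driver table. The hugepage half of that report comes straight from kernel sysfs and can be reproduced by hand with the standard per-node counters; a small sketch covering the two page sizes shown above:

# Per-NUMA-node hugepage counters, the raw data behind 'setup.sh status' above.
for node in /sys/devices/system/node/node[0-9]*; do
    for sz in hugepages-2048kB hugepages-1048576kB; do
        total=$(cat "$node/hugepages/$sz/nr_hugepages")
        free=$(cat "$node/hugepages/$sz/free_hugepages")
        echo "$(basename "$node") $sz: $free free / $total total"
    done
done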
00:04:37.098 10:30:01 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:37.098 10:30:01 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:37.098 10:30:01 -- common/autotest_common.sh@1530 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:40.495 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:40.495 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:40.495 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:40.495 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:40.495 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:40.495 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:40.495 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:40.495 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:40.495 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:40.495 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:40.495 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:40.495 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:40.495 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:40.495 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:40.495 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:40.495 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:42.404 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:42.404 10:30:06 -- common/autotest_common.sh@1531 -- # sleep 1 00:04:43.344 10:30:07 -- common/autotest_common.sh@1532 -- # bdfs=() 00:04:43.344 10:30:07 -- common/autotest_common.sh@1532 -- # local bdfs 00:04:43.344 10:30:07 -- common/autotest_common.sh@1533 -- # bdfs=($(get_nvme_bdfs)) 00:04:43.344 10:30:07 -- common/autotest_common.sh@1533 -- # get_nvme_bdfs 00:04:43.344 10:30:07 -- common/autotest_common.sh@1512 -- # bdfs=() 00:04:43.344 10:30:07 -- common/autotest_common.sh@1512 -- # local bdfs 00:04:43.344 10:30:07 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:43.344 10:30:07 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:43.344 10:30:07 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:04:43.605 10:30:07 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:04:43.605 10:30:07 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:65:00.0 00:04:43.605 10:30:07 -- common/autotest_common.sh@1535 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:46.908 Waiting for block devices as requested 00:04:46.908 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:46.908 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:47.169 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:47.169 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:47.169 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:47.430 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:47.430 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:47.430 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:47.691 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:04:47.691 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:47.691 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:47.970 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:47.970 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:47.970 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:47.970 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:48.237 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:48.237 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:48.237 10:30:12 -- common/autotest_common.sh@1537 -- # 
for bdf in "${bdfs[@]}" 00:04:48.237 10:30:12 -- common/autotest_common.sh@1538 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:48.237 10:30:12 -- common/autotest_common.sh@1501 -- # readlink -f /sys/class/nvme/nvme0 00:04:48.237 10:30:12 -- common/autotest_common.sh@1501 -- # grep 0000:65:00.0/nvme/nvme 00:04:48.237 10:30:12 -- common/autotest_common.sh@1501 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:48.237 10:30:12 -- common/autotest_common.sh@1502 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:48.237 10:30:12 -- common/autotest_common.sh@1506 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:48.237 10:30:12 -- common/autotest_common.sh@1506 -- # printf '%s\n' nvme0 00:04:48.237 10:30:12 -- common/autotest_common.sh@1538 -- # nvme_ctrlr=/dev/nvme0 00:04:48.237 10:30:12 -- common/autotest_common.sh@1539 -- # [[ -z /dev/nvme0 ]] 00:04:48.237 10:30:12 -- common/autotest_common.sh@1544 -- # nvme id-ctrl /dev/nvme0 00:04:48.237 10:30:12 -- common/autotest_common.sh@1544 -- # grep oacs 00:04:48.237 10:30:12 -- common/autotest_common.sh@1544 -- # cut -d: -f2 00:04:48.237 10:30:12 -- common/autotest_common.sh@1544 -- # oacs=' 0x5f' 00:04:48.237 10:30:12 -- common/autotest_common.sh@1545 -- # oacs_ns_manage=8 00:04:48.237 10:30:12 -- common/autotest_common.sh@1547 -- # [[ 8 -ne 0 ]] 00:04:48.237 10:30:12 -- common/autotest_common.sh@1553 -- # nvme id-ctrl /dev/nvme0 00:04:48.237 10:30:12 -- common/autotest_common.sh@1553 -- # grep unvmcap 00:04:48.237 10:30:12 -- common/autotest_common.sh@1553 -- # cut -d: -f2 00:04:48.237 10:30:12 -- common/autotest_common.sh@1553 -- # unvmcap=' 0' 00:04:48.237 10:30:12 -- common/autotest_common.sh@1554 -- # [[ 0 -eq 0 ]] 00:04:48.237 10:30:12 -- common/autotest_common.sh@1556 -- # continue 00:04:48.237 10:30:12 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:48.237 10:30:12 -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:48.237 10:30:12 -- common/autotest_common.sh@10 -- # set +x 00:04:48.237 10:30:12 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:48.237 10:30:12 -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:48.237 10:30:12 -- common/autotest_common.sh@10 -- # set +x 00:04:48.237 10:30:12 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:52.445 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:52.445 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:52.445 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:52.445 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:52.445 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:52.445 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:52.445 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:52.445 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:52.445 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:52.445 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:52.445 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:52.445 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:52.445 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:52.445 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:52.445 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:52.445 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:52.445 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:52.445 10:30:16 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:52.445 10:30:16 -- common/autotest_common.sh@729 -- # xtrace_disable 
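The pre-cleanup loop above resolves the allowed BDF (0000:65:00.0) to its NVMe character device through sysfs and then reads the controller's OACS field with nvme-cli; bit 3 of OACS advertises Namespace Management support, which is why the trace masks the reported value (0x5f) and keeps 8. A condensed sketch of the same check, with the BDF taken from the trace and nvme-cli assumed to be installed:

bdf=0000:65:00.0
# Find the nvme controller node whose sysfs path goes through this PCI address.
for link in /sys/class/nvme/nvme*; do
    readlink -f "$link" | grep -q "$bdf/nvme/nvme" && ctrl=$(basename "$link")
done
# OACS bit 3 set => the controller supports Namespace Management.
oacs=$(nvme id-ctrl "/dev/$ctrl" | grep oacs | cut -d: -f2)   # trace shows 0x5f
if (( oacs & 0x8 )); then
    echo "/dev/$ctrl supports Namespace Management"
fi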
00:04:52.445 10:30:16 -- common/autotest_common.sh@10 -- # set +x 00:04:52.445 10:30:16 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:52.445 10:30:16 -- common/autotest_common.sh@1590 -- # mapfile -t bdfs 00:04:52.445 10:30:16 -- common/autotest_common.sh@1590 -- # get_nvme_bdfs_by_id 0x0a54 00:04:52.445 10:30:16 -- common/autotest_common.sh@1576 -- # bdfs=() 00:04:52.445 10:30:16 -- common/autotest_common.sh@1576 -- # local bdfs 00:04:52.445 10:30:16 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs 00:04:52.445 10:30:16 -- common/autotest_common.sh@1512 -- # bdfs=() 00:04:52.445 10:30:16 -- common/autotest_common.sh@1512 -- # local bdfs 00:04:52.445 10:30:16 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:52.445 10:30:16 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:52.445 10:30:16 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:04:52.445 10:30:16 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:04:52.445 10:30:16 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:65:00.0 00:04:52.445 10:30:16 -- common/autotest_common.sh@1578 -- # for bdf in $(get_nvme_bdfs) 00:04:52.445 10:30:16 -- common/autotest_common.sh@1579 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:52.445 10:30:16 -- common/autotest_common.sh@1579 -- # device=0xa80a 00:04:52.445 10:30:16 -- common/autotest_common.sh@1580 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:52.445 10:30:16 -- common/autotest_common.sh@1585 -- # printf '%s\n' 00:04:52.445 10:30:16 -- common/autotest_common.sh@1591 -- # [[ -z '' ]] 00:04:52.445 10:30:16 -- common/autotest_common.sh@1592 -- # return 0 00:04:52.445 10:30:16 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:52.445 10:30:16 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:52.445 10:30:16 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:52.445 10:30:16 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:52.445 10:30:16 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:52.445 10:30:16 -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:52.445 10:30:16 -- common/autotest_common.sh@10 -- # set +x 00:04:52.445 10:30:16 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:52.445 10:30:16 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:52.445 10:30:16 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:52.445 10:30:16 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:52.445 10:30:16 -- common/autotest_common.sh@10 -- # set +x 00:04:52.445 ************************************ 00:04:52.445 START TEST env 00:04:52.445 ************************************ 00:04:52.445 10:30:16 env -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:52.445 * Looking for test storage... 
00:04:52.445 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:52.445 10:30:16 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:52.445 10:30:16 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:52.445 10:30:16 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:52.445 10:30:16 env -- common/autotest_common.sh@10 -- # set +x 00:04:52.445 ************************************ 00:04:52.445 START TEST env_memory 00:04:52.445 ************************************ 00:04:52.445 10:30:16 env.env_memory -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:52.445 00:04:52.445 00:04:52.445 CUnit - A unit testing framework for C - Version 2.1-3 00:04:52.445 http://cunit.sourceforge.net/ 00:04:52.445 00:04:52.445 00:04:52.445 Suite: memory 00:04:52.445 Test: alloc and free memory map ...[2024-06-10 10:30:16.476086] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:52.445 passed 00:04:52.445 Test: mem map translation ...[2024-06-10 10:30:16.501644] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:52.445 [2024-06-10 10:30:16.501676] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:52.445 [2024-06-10 10:30:16.501724] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:52.445 [2024-06-10 10:30:16.501732] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:52.445 passed 00:04:52.445 Test: mem map registration ...[2024-06-10 10:30:16.556748] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:52.445 [2024-06-10 10:30:16.556763] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:52.445 passed 00:04:52.445 Test: mem map adjacent registrations ...passed 00:04:52.445 00:04:52.445 Run Summary: Type Total Ran Passed Failed Inactive 00:04:52.445 suites 1 1 n/a 0 0 00:04:52.445 tests 4 4 4 0 0 00:04:52.445 asserts 152 152 152 0 n/a 00:04:52.445 00:04:52.445 Elapsed time = 0.192 seconds 00:04:52.445 00:04:52.445 real 0m0.206s 00:04:52.445 user 0m0.195s 00:04:52.445 sys 0m0.010s 00:04:52.445 10:30:16 env.env_memory -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:52.445 10:30:16 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:52.445 ************************************ 00:04:52.445 END TEST env_memory 00:04:52.445 ************************************ 00:04:52.445 10:30:16 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:52.445 10:30:16 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:52.445 10:30:16 env -- common/autotest_common.sh@1106 -- # xtrace_disable 
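Every suite in this section is driven through the same run_test pattern visible in the trace: print a START banner, run the test binary under time (hence the real/user/sys lines), print an END banner, and propagate a non-zero exit status to the outer script. The actual helper lives in autotest_common.sh; the following is only a minimal stand-in with the same observable behavior, not that implementation:

# Minimal stand-in for the run_test banner/timing pattern seen throughout this log.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@" || return 1        # the real helper fails the whole suite on error
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}

run_test env_vtophys ./test/env/vtophys/vtophys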
00:04:52.445 10:30:16 env -- common/autotest_common.sh@10 -- # set +x 00:04:52.445 ************************************ 00:04:52.445 START TEST env_vtophys 00:04:52.445 ************************************ 00:04:52.445 10:30:16 env.env_vtophys -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:52.445 EAL: lib.eal log level changed from notice to debug 00:04:52.445 EAL: Detected lcore 0 as core 0 on socket 0 00:04:52.445 EAL: Detected lcore 1 as core 1 on socket 0 00:04:52.445 EAL: Detected lcore 2 as core 2 on socket 0 00:04:52.445 EAL: Detected lcore 3 as core 3 on socket 0 00:04:52.445 EAL: Detected lcore 4 as core 4 on socket 0 00:04:52.445 EAL: Detected lcore 5 as core 5 on socket 0 00:04:52.446 EAL: Detected lcore 6 as core 6 on socket 0 00:04:52.446 EAL: Detected lcore 7 as core 7 on socket 0 00:04:52.446 EAL: Detected lcore 8 as core 8 on socket 0 00:04:52.446 EAL: Detected lcore 9 as core 9 on socket 0 00:04:52.446 EAL: Detected lcore 10 as core 10 on socket 0 00:04:52.446 EAL: Detected lcore 11 as core 11 on socket 0 00:04:52.446 EAL: Detected lcore 12 as core 12 on socket 0 00:04:52.446 EAL: Detected lcore 13 as core 13 on socket 0 00:04:52.446 EAL: Detected lcore 14 as core 14 on socket 0 00:04:52.446 EAL: Detected lcore 15 as core 15 on socket 0 00:04:52.446 EAL: Detected lcore 16 as core 16 on socket 0 00:04:52.446 EAL: Detected lcore 17 as core 17 on socket 0 00:04:52.446 EAL: Detected lcore 18 as core 18 on socket 0 00:04:52.446 EAL: Detected lcore 19 as core 19 on socket 0 00:04:52.446 EAL: Detected lcore 20 as core 20 on socket 0 00:04:52.446 EAL: Detected lcore 21 as core 21 on socket 0 00:04:52.446 EAL: Detected lcore 22 as core 22 on socket 0 00:04:52.446 EAL: Detected lcore 23 as core 23 on socket 0 00:04:52.446 EAL: Detected lcore 24 as core 24 on socket 0 00:04:52.446 EAL: Detected lcore 25 as core 25 on socket 0 00:04:52.446 EAL: Detected lcore 26 as core 26 on socket 0 00:04:52.446 EAL: Detected lcore 27 as core 27 on socket 0 00:04:52.446 EAL: Detected lcore 28 as core 28 on socket 0 00:04:52.446 EAL: Detected lcore 29 as core 29 on socket 0 00:04:52.446 EAL: Detected lcore 30 as core 30 on socket 0 00:04:52.446 EAL: Detected lcore 31 as core 31 on socket 0 00:04:52.446 EAL: Detected lcore 32 as core 32 on socket 0 00:04:52.446 EAL: Detected lcore 33 as core 33 on socket 0 00:04:52.446 EAL: Detected lcore 34 as core 34 on socket 0 00:04:52.446 EAL: Detected lcore 35 as core 35 on socket 0 00:04:52.446 EAL: Detected lcore 36 as core 0 on socket 1 00:04:52.446 EAL: Detected lcore 37 as core 1 on socket 1 00:04:52.446 EAL: Detected lcore 38 as core 2 on socket 1 00:04:52.446 EAL: Detected lcore 39 as core 3 on socket 1 00:04:52.446 EAL: Detected lcore 40 as core 4 on socket 1 00:04:52.446 EAL: Detected lcore 41 as core 5 on socket 1 00:04:52.446 EAL: Detected lcore 42 as core 6 on socket 1 00:04:52.446 EAL: Detected lcore 43 as core 7 on socket 1 00:04:52.446 EAL: Detected lcore 44 as core 8 on socket 1 00:04:52.446 EAL: Detected lcore 45 as core 9 on socket 1 00:04:52.446 EAL: Detected lcore 46 as core 10 on socket 1 00:04:52.446 EAL: Detected lcore 47 as core 11 on socket 1 00:04:52.446 EAL: Detected lcore 48 as core 12 on socket 1 00:04:52.446 EAL: Detected lcore 49 as core 13 on socket 1 00:04:52.446 EAL: Detected lcore 50 as core 14 on socket 1 00:04:52.446 EAL: Detected lcore 51 as core 15 on socket 1 00:04:52.446 EAL: Detected lcore 52 as core 16 on socket 1 00:04:52.446 EAL: Detected lcore 
53 as core 17 on socket 1 00:04:52.446 EAL: Detected lcore 54 as core 18 on socket 1 00:04:52.446 EAL: Detected lcore 55 as core 19 on socket 1 00:04:52.446 EAL: Detected lcore 56 as core 20 on socket 1 00:04:52.446 EAL: Detected lcore 57 as core 21 on socket 1 00:04:52.446 EAL: Detected lcore 58 as core 22 on socket 1 00:04:52.446 EAL: Detected lcore 59 as core 23 on socket 1 00:04:52.446 EAL: Detected lcore 60 as core 24 on socket 1 00:04:52.446 EAL: Detected lcore 61 as core 25 on socket 1 00:04:52.446 EAL: Detected lcore 62 as core 26 on socket 1 00:04:52.446 EAL: Detected lcore 63 as core 27 on socket 1 00:04:52.446 EAL: Detected lcore 64 as core 28 on socket 1 00:04:52.446 EAL: Detected lcore 65 as core 29 on socket 1 00:04:52.446 EAL: Detected lcore 66 as core 30 on socket 1 00:04:52.446 EAL: Detected lcore 67 as core 31 on socket 1 00:04:52.446 EAL: Detected lcore 68 as core 32 on socket 1 00:04:52.446 EAL: Detected lcore 69 as core 33 on socket 1 00:04:52.446 EAL: Detected lcore 70 as core 34 on socket 1 00:04:52.446 EAL: Detected lcore 71 as core 35 on socket 1 00:04:52.446 EAL: Detected lcore 72 as core 0 on socket 0 00:04:52.446 EAL: Detected lcore 73 as core 1 on socket 0 00:04:52.446 EAL: Detected lcore 74 as core 2 on socket 0 00:04:52.446 EAL: Detected lcore 75 as core 3 on socket 0 00:04:52.446 EAL: Detected lcore 76 as core 4 on socket 0 00:04:52.446 EAL: Detected lcore 77 as core 5 on socket 0 00:04:52.446 EAL: Detected lcore 78 as core 6 on socket 0 00:04:52.446 EAL: Detected lcore 79 as core 7 on socket 0 00:04:52.446 EAL: Detected lcore 80 as core 8 on socket 0 00:04:52.446 EAL: Detected lcore 81 as core 9 on socket 0 00:04:52.446 EAL: Detected lcore 82 as core 10 on socket 0 00:04:52.446 EAL: Detected lcore 83 as core 11 on socket 0 00:04:52.446 EAL: Detected lcore 84 as core 12 on socket 0 00:04:52.446 EAL: Detected lcore 85 as core 13 on socket 0 00:04:52.446 EAL: Detected lcore 86 as core 14 on socket 0 00:04:52.446 EAL: Detected lcore 87 as core 15 on socket 0 00:04:52.446 EAL: Detected lcore 88 as core 16 on socket 0 00:04:52.446 EAL: Detected lcore 89 as core 17 on socket 0 00:04:52.446 EAL: Detected lcore 90 as core 18 on socket 0 00:04:52.446 EAL: Detected lcore 91 as core 19 on socket 0 00:04:52.446 EAL: Detected lcore 92 as core 20 on socket 0 00:04:52.446 EAL: Detected lcore 93 as core 21 on socket 0 00:04:52.446 EAL: Detected lcore 94 as core 22 on socket 0 00:04:52.446 EAL: Detected lcore 95 as core 23 on socket 0 00:04:52.446 EAL: Detected lcore 96 as core 24 on socket 0 00:04:52.446 EAL: Detected lcore 97 as core 25 on socket 0 00:04:52.446 EAL: Detected lcore 98 as core 26 on socket 0 00:04:52.446 EAL: Detected lcore 99 as core 27 on socket 0 00:04:52.446 EAL: Detected lcore 100 as core 28 on socket 0 00:04:52.446 EAL: Detected lcore 101 as core 29 on socket 0 00:04:52.446 EAL: Detected lcore 102 as core 30 on socket 0 00:04:52.446 EAL: Detected lcore 103 as core 31 on socket 0 00:04:52.446 EAL: Detected lcore 104 as core 32 on socket 0 00:04:52.446 EAL: Detected lcore 105 as core 33 on socket 0 00:04:52.446 EAL: Detected lcore 106 as core 34 on socket 0 00:04:52.446 EAL: Detected lcore 107 as core 35 on socket 0 00:04:52.446 EAL: Detected lcore 108 as core 0 on socket 1 00:04:52.446 EAL: Detected lcore 109 as core 1 on socket 1 00:04:52.446 EAL: Detected lcore 110 as core 2 on socket 1 00:04:52.446 EAL: Detected lcore 111 as core 3 on socket 1 00:04:52.446 EAL: Detected lcore 112 as core 4 on socket 1 00:04:52.446 EAL: Detected lcore 113 as core 5 on 
socket 1 00:04:52.446 EAL: Detected lcore 114 as core 6 on socket 1 00:04:52.446 EAL: Detected lcore 115 as core 7 on socket 1 00:04:52.446 EAL: Detected lcore 116 as core 8 on socket 1 00:04:52.446 EAL: Detected lcore 117 as core 9 on socket 1 00:04:52.446 EAL: Detected lcore 118 as core 10 on socket 1 00:04:52.446 EAL: Detected lcore 119 as core 11 on socket 1 00:04:52.446 EAL: Detected lcore 120 as core 12 on socket 1 00:04:52.446 EAL: Detected lcore 121 as core 13 on socket 1 00:04:52.446 EAL: Detected lcore 122 as core 14 on socket 1 00:04:52.446 EAL: Detected lcore 123 as core 15 on socket 1 00:04:52.446 EAL: Detected lcore 124 as core 16 on socket 1 00:04:52.446 EAL: Detected lcore 125 as core 17 on socket 1 00:04:52.446 EAL: Detected lcore 126 as core 18 on socket 1 00:04:52.446 EAL: Detected lcore 127 as core 19 on socket 1 00:04:52.446 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:52.446 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:52.446 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:52.446 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:52.446 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:52.446 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:52.446 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:52.446 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:52.446 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:52.446 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:52.446 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:52.446 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:52.446 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:52.446 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:52.446 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:52.446 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:52.446 EAL: Maximum logical cores by configuration: 128 00:04:52.446 EAL: Detected CPU lcores: 128 00:04:52.446 EAL: Detected NUMA nodes: 2 00:04:52.446 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:52.446 EAL: Detected shared linkage of DPDK 00:04:52.708 EAL: No shared files mode enabled, IPC will be disabled 00:04:52.708 EAL: Bus pci wants IOVA as 'DC' 00:04:52.708 EAL: Buses did not request a specific IOVA mode. 00:04:52.708 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:52.708 EAL: Selected IOVA mode 'VA' 00:04:52.708 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.708 EAL: Probing VFIO support... 00:04:52.708 EAL: IOMMU type 1 (Type 1) is supported 00:04:52.708 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:52.708 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:52.708 EAL: VFIO support initialized 00:04:52.708 EAL: Ask a virtual area of 0x2e000 bytes 00:04:52.708 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:52.708 EAL: Setting up physically contiguous memory... 
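Before any memory is mapped, EAL probes for an IOMMU and VFIO support (IOMMU type 1 accepted, types 7 and 8 rejected above) and only then starts reserving virtual address space for its memseg lists. Whether the host offers that VFIO/IOMMU path can be checked from plain sysfs before launching the tests; a small pre-flight sketch, independent of SPDK/DPDK:

# Pre-flight check for the IOMMU/VFIO support that EAL probes above.
if [ -d /sys/kernel/iommu_groups ] && [ -n "$(ls -A /sys/kernel/iommu_groups)" ]; then
    echo "IOMMU enabled: $(ls /sys/kernel/iommu_groups | wc -l) groups"
else
    echo "no IOMMU groups found; VFIO would need no-IOMMU mode"
fi
modprobe vfio-pci 2>/dev/null && echo "vfio-pci module loaded"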
00:04:52.709 EAL: Setting maximum number of open files to 524288 00:04:52.709 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:52.709 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:52.709 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:52.709 EAL: Ask a virtual area of 0x61000 bytes 00:04:52.709 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:52.709 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:52.709 EAL: Ask a virtual area of 0x400000000 bytes 00:04:52.709 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:52.709 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:52.709 EAL: Ask a virtual area of 0x61000 bytes 00:04:52.709 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:52.709 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:52.709 EAL: Ask a virtual area of 0x400000000 bytes 00:04:52.709 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:52.709 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:52.709 EAL: Ask a virtual area of 0x61000 bytes 00:04:52.709 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:52.709 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:52.709 EAL: Ask a virtual area of 0x400000000 bytes 00:04:52.709 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:52.709 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:52.709 EAL: Ask a virtual area of 0x61000 bytes 00:04:52.709 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:52.709 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:52.709 EAL: Ask a virtual area of 0x400000000 bytes 00:04:52.709 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:52.709 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:52.709 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:52.709 EAL: Ask a virtual area of 0x61000 bytes 00:04:52.709 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:52.709 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:52.709 EAL: Ask a virtual area of 0x400000000 bytes 00:04:52.709 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:52.709 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:52.709 EAL: Ask a virtual area of 0x61000 bytes 00:04:52.709 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:52.709 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:52.709 EAL: Ask a virtual area of 0x400000000 bytes 00:04:52.709 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:52.709 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:52.709 EAL: Ask a virtual area of 0x61000 bytes 00:04:52.709 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:52.709 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:52.709 EAL: Ask a virtual area of 0x400000000 bytes 00:04:52.709 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:52.709 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:52.709 EAL: Ask a virtual area of 0x61000 bytes 00:04:52.709 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:52.709 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:52.709 EAL: Ask a virtual area of 0x400000000 bytes 00:04:52.709 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:52.709 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:52.709 EAL: Hugepages will be freed exactly as allocated. 00:04:52.709 EAL: No shared files mode enabled, IPC is disabled 00:04:52.709 EAL: No shared files mode enabled, IPC is disabled 00:04:52.709 EAL: TSC frequency is ~2400000 KHz 00:04:52.709 EAL: Main lcore 0 is ready (tid=7fe0360d8a00;cpuset=[0]) 00:04:52.709 EAL: Trying to obtain current memory policy. 00:04:52.709 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.709 EAL: Restoring previous memory policy: 0 00:04:52.709 EAL: request: mp_malloc_sync 00:04:52.709 EAL: No shared files mode enabled, IPC is disabled 00:04:52.709 EAL: Heap on socket 0 was expanded by 2MB 00:04:52.709 EAL: No shared files mode enabled, IPC is disabled 00:04:52.709 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:52.709 EAL: Mem event callback 'spdk:(nil)' registered 00:04:52.709 00:04:52.709 00:04:52.709 CUnit - A unit testing framework for C - Version 2.1-3 00:04:52.709 http://cunit.sourceforge.net/ 00:04:52.709 00:04:52.709 00:04:52.709 Suite: components_suite 00:04:52.709 Test: vtophys_malloc_test ...passed 00:04:52.709 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:52.709 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.709 EAL: Restoring previous memory policy: 4 00:04:52.709 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.709 EAL: request: mp_malloc_sync 00:04:52.709 EAL: No shared files mode enabled, IPC is disabled 00:04:52.709 EAL: Heap on socket 0 was expanded by 4MB 00:04:52.709 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.709 EAL: request: mp_malloc_sync 00:04:52.709 EAL: No shared files mode enabled, IPC is disabled 00:04:52.709 EAL: Heap on socket 0 was shrunk by 4MB 00:04:52.709 EAL: Trying to obtain current memory policy. 00:04:52.709 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.709 EAL: Restoring previous memory policy: 4 00:04:52.709 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.709 EAL: request: mp_malloc_sync 00:04:52.709 EAL: No shared files mode enabled, IPC is disabled 00:04:52.709 EAL: Heap on socket 0 was expanded by 6MB 00:04:52.709 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.709 EAL: request: mp_malloc_sync 00:04:52.709 EAL: No shared files mode enabled, IPC is disabled 00:04:52.709 EAL: Heap on socket 0 was shrunk by 6MB 00:04:52.709 EAL: Trying to obtain current memory policy. 00:04:52.709 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.709 EAL: Restoring previous memory policy: 4 00:04:52.709 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.709 EAL: request: mp_malloc_sync 00:04:52.709 EAL: No shared files mode enabled, IPC is disabled 00:04:52.709 EAL: Heap on socket 0 was expanded by 10MB 00:04:52.709 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.709 EAL: request: mp_malloc_sync 00:04:52.709 EAL: No shared files mode enabled, IPC is disabled 00:04:52.709 EAL: Heap on socket 0 was shrunk by 10MB 00:04:52.709 EAL: Trying to obtain current memory policy. 
00:04:52.709 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.709 EAL: Restoring previous memory policy: 4 00:04:52.709 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.709 EAL: request: mp_malloc_sync 00:04:52.709 EAL: No shared files mode enabled, IPC is disabled 00:04:52.709 EAL: Heap on socket 0 was expanded by 18MB 00:04:52.709 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.709 EAL: request: mp_malloc_sync 00:04:52.709 EAL: No shared files mode enabled, IPC is disabled 00:04:52.709 EAL: Heap on socket 0 was shrunk by 18MB 00:04:52.709 EAL: Trying to obtain current memory policy. 00:04:52.709 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.709 EAL: Restoring previous memory policy: 4 00:04:52.709 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.709 EAL: request: mp_malloc_sync 00:04:52.709 EAL: No shared files mode enabled, IPC is disabled 00:04:52.709 EAL: Heap on socket 0 was expanded by 34MB 00:04:52.709 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.709 EAL: request: mp_malloc_sync 00:04:52.709 EAL: No shared files mode enabled, IPC is disabled 00:04:52.709 EAL: Heap on socket 0 was shrunk by 34MB 00:04:52.709 EAL: Trying to obtain current memory policy. 00:04:52.709 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.709 EAL: Restoring previous memory policy: 4 00:04:52.709 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.709 EAL: request: mp_malloc_sync 00:04:52.709 EAL: No shared files mode enabled, IPC is disabled 00:04:52.709 EAL: Heap on socket 0 was expanded by 66MB 00:04:52.709 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.709 EAL: request: mp_malloc_sync 00:04:52.709 EAL: No shared files mode enabled, IPC is disabled 00:04:52.709 EAL: Heap on socket 0 was shrunk by 66MB 00:04:52.709 EAL: Trying to obtain current memory policy. 00:04:52.709 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.709 EAL: Restoring previous memory policy: 4 00:04:52.709 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.709 EAL: request: mp_malloc_sync 00:04:52.709 EAL: No shared files mode enabled, IPC is disabled 00:04:52.709 EAL: Heap on socket 0 was expanded by 130MB 00:04:52.709 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.709 EAL: request: mp_malloc_sync 00:04:52.709 EAL: No shared files mode enabled, IPC is disabled 00:04:52.709 EAL: Heap on socket 0 was shrunk by 130MB 00:04:52.709 EAL: Trying to obtain current memory policy. 00:04:52.709 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.709 EAL: Restoring previous memory policy: 4 00:04:52.709 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.709 EAL: request: mp_malloc_sync 00:04:52.709 EAL: No shared files mode enabled, IPC is disabled 00:04:52.709 EAL: Heap on socket 0 was expanded by 258MB 00:04:52.709 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.709 EAL: request: mp_malloc_sync 00:04:52.709 EAL: No shared files mode enabled, IPC is disabled 00:04:52.709 EAL: Heap on socket 0 was shrunk by 258MB 00:04:52.709 EAL: Trying to obtain current memory policy. 
00:04:52.709 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.970 EAL: Restoring previous memory policy: 4 00:04:52.970 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.970 EAL: request: mp_malloc_sync 00:04:52.970 EAL: No shared files mode enabled, IPC is disabled 00:04:52.970 EAL: Heap on socket 0 was expanded by 514MB 00:04:52.970 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.970 EAL: request: mp_malloc_sync 00:04:52.970 EAL: No shared files mode enabled, IPC is disabled 00:04:52.970 EAL: Heap on socket 0 was shrunk by 514MB 00:04:52.970 EAL: Trying to obtain current memory policy. 00:04:52.970 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.231 EAL: Restoring previous memory policy: 4 00:04:53.231 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.231 EAL: request: mp_malloc_sync 00:04:53.231 EAL: No shared files mode enabled, IPC is disabled 00:04:53.231 EAL: Heap on socket 0 was expanded by 1026MB 00:04:53.231 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.231 EAL: request: mp_malloc_sync 00:04:53.231 EAL: No shared files mode enabled, IPC is disabled 00:04:53.231 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:53.231 passed 00:04:53.231 00:04:53.231 Run Summary: Type Total Ran Passed Failed Inactive 00:04:53.231 suites 1 1 n/a 0 0 00:04:53.231 tests 2 2 2 0 0 00:04:53.231 asserts 497 497 497 0 n/a 00:04:53.231 00:04:53.231 Elapsed time = 0.647 seconds 00:04:53.231 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.231 EAL: request: mp_malloc_sync 00:04:53.231 EAL: No shared files mode enabled, IPC is disabled 00:04:53.231 EAL: Heap on socket 0 was shrunk by 2MB 00:04:53.231 EAL: No shared files mode enabled, IPC is disabled 00:04:53.231 EAL: No shared files mode enabled, IPC is disabled 00:04:53.231 EAL: No shared files mode enabled, IPC is disabled 00:04:53.231 00:04:53.231 real 0m0.770s 00:04:53.231 user 0m0.405s 00:04:53.231 sys 0m0.331s 00:04:53.231 10:30:17 env.env_vtophys -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:53.231 10:30:17 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:53.231 ************************************ 00:04:53.231 END TEST env_vtophys 00:04:53.231 ************************************ 00:04:53.231 10:30:17 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:53.231 10:30:17 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:53.231 10:30:17 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:53.231 10:30:17 env -- common/autotest_common.sh@10 -- # set +x 00:04:53.491 ************************************ 00:04:53.491 START TEST env_pci 00:04:53.491 ************************************ 00:04:53.491 10:30:17 env.env_pci -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:53.491 00:04:53.491 00:04:53.491 CUnit - A unit testing framework for C - Version 2.1-3 00:04:53.491 http://cunit.sourceforge.net/ 00:04:53.491 00:04:53.491 00:04:53.491 Suite: pci 00:04:53.491 Test: pci_hook ...[2024-06-10 10:30:17.567195] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 611970 has claimed it 00:04:53.491 EAL: Cannot find device (10000:00:01.0) 00:04:53.491 EAL: Failed to attach device on primary process 00:04:53.491 passed 00:04:53.491 00:04:53.491 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:53.491 suites 1 1 n/a 0 0 00:04:53.491 tests 1 1 1 0 0 00:04:53.491 asserts 25 25 25 0 n/a 00:04:53.491 00:04:53.491 Elapsed time = 0.030 seconds 00:04:53.491 00:04:53.491 real 0m0.050s 00:04:53.491 user 0m0.014s 00:04:53.491 sys 0m0.036s 00:04:53.491 10:30:17 env.env_pci -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:53.491 10:30:17 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:53.491 ************************************ 00:04:53.491 END TEST env_pci 00:04:53.491 ************************************ 00:04:53.491 10:30:17 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:53.491 10:30:17 env -- env/env.sh@15 -- # uname 00:04:53.491 10:30:17 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:53.491 10:30:17 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:53.491 10:30:17 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:53.491 10:30:17 env -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:04:53.491 10:30:17 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:53.491 10:30:17 env -- common/autotest_common.sh@10 -- # set +x 00:04:53.491 ************************************ 00:04:53.491 START TEST env_dpdk_post_init 00:04:53.491 ************************************ 00:04:53.491 10:30:17 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:53.491 EAL: Detected CPU lcores: 128 00:04:53.491 EAL: Detected NUMA nodes: 2 00:04:53.491 EAL: Detected shared linkage of DPDK 00:04:53.491 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:53.491 EAL: Selected IOVA mode 'VA' 00:04:53.491 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.491 EAL: VFIO support initialized 00:04:53.491 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:53.752 EAL: Using IOMMU type 1 (Type 1) 00:04:53.752 EAL: Ignore mapping IO port bar(1) 00:04:53.752 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:04:54.012 EAL: Ignore mapping IO port bar(1) 00:04:54.012 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:04:54.272 EAL: Ignore mapping IO port bar(1) 00:04:54.272 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:04:54.533 EAL: Ignore mapping IO port bar(1) 00:04:54.533 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:04:54.533 EAL: Ignore mapping IO port bar(1) 00:04:54.794 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:04:54.794 EAL: Ignore mapping IO port bar(1) 00:04:55.055 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:04:55.055 EAL: Ignore mapping IO port bar(1) 00:04:55.316 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:04:55.316 EAL: Ignore mapping IO port bar(1) 00:04:55.316 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:04:55.577 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:04:55.838 EAL: Ignore mapping IO port bar(1) 00:04:55.838 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:04:56.099 EAL: Ignore mapping IO port bar(1) 00:04:56.099 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 
00:04:56.099 EAL: Ignore mapping IO port bar(1) 00:04:56.360 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:04:56.360 EAL: Ignore mapping IO port bar(1) 00:04:56.622 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:04:56.622 EAL: Ignore mapping IO port bar(1) 00:04:56.882 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:04:56.882 EAL: Ignore mapping IO port bar(1) 00:04:56.882 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:04:57.144 EAL: Ignore mapping IO port bar(1) 00:04:57.144 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:04:57.405 EAL: Ignore mapping IO port bar(1) 00:04:57.405 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:04:57.405 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:04:57.405 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:04:57.665 Starting DPDK initialization... 00:04:57.665 Starting SPDK post initialization... 00:04:57.665 SPDK NVMe probe 00:04:57.666 Attaching to 0000:65:00.0 00:04:57.666 Attached to 0000:65:00.0 00:04:57.666 Cleaning up... 00:04:59.116 00:04:59.116 real 0m5.714s 00:04:59.116 user 0m0.182s 00:04:59.116 sys 0m0.078s 00:04:59.116 10:30:23 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:59.116 10:30:23 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:59.116 ************************************ 00:04:59.116 END TEST env_dpdk_post_init 00:04:59.116 ************************************ 00:04:59.378 10:30:23 env -- env/env.sh@26 -- # uname 00:04:59.378 10:30:23 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:59.378 10:30:23 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:59.378 10:30:23 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:59.378 10:30:23 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:59.378 10:30:23 env -- common/autotest_common.sh@10 -- # set +x 00:04:59.378 ************************************ 00:04:59.378 START TEST env_mem_callbacks 00:04:59.378 ************************************ 00:04:59.378 10:30:23 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:59.378 EAL: Detected CPU lcores: 128 00:04:59.378 EAL: Detected NUMA nodes: 2 00:04:59.378 EAL: Detected shared linkage of DPDK 00:04:59.378 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:59.378 EAL: Selected IOVA mode 'VA' 00:04:59.378 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.378 EAL: VFIO support initialized 00:04:59.378 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:59.378 00:04:59.378 00:04:59.378 CUnit - A unit testing framework for C - Version 2.1-3 00:04:59.378 http://cunit.sourceforge.net/ 00:04:59.378 00:04:59.378 00:04:59.378 Suite: memory 00:04:59.378 Test: test ... 
00:04:59.378 register 0x200000200000 2097152 00:04:59.378 malloc 3145728 00:04:59.378 register 0x200000400000 4194304 00:04:59.378 buf 0x200000500000 len 3145728 PASSED 00:04:59.378 malloc 64 00:04:59.378 buf 0x2000004fff40 len 64 PASSED 00:04:59.378 malloc 4194304 00:04:59.378 register 0x200000800000 6291456 00:04:59.378 buf 0x200000a00000 len 4194304 PASSED 00:04:59.378 free 0x200000500000 3145728 00:04:59.378 free 0x2000004fff40 64 00:04:59.378 unregister 0x200000400000 4194304 PASSED 00:04:59.378 free 0x200000a00000 4194304 00:04:59.378 unregister 0x200000800000 6291456 PASSED 00:04:59.378 malloc 8388608 00:04:59.378 register 0x200000400000 10485760 00:04:59.378 buf 0x200000600000 len 8388608 PASSED 00:04:59.378 free 0x200000600000 8388608 00:04:59.378 unregister 0x200000400000 10485760 PASSED 00:04:59.378 passed 00:04:59.378 00:04:59.378 Run Summary: Type Total Ran Passed Failed Inactive 00:04:59.378 suites 1 1 n/a 0 0 00:04:59.378 tests 1 1 1 0 0 00:04:59.378 asserts 15 15 15 0 n/a 00:04:59.378 00:04:59.378 Elapsed time = 0.005 seconds 00:04:59.378 00:04:59.378 real 0m0.060s 00:04:59.378 user 0m0.017s 00:04:59.378 sys 0m0.043s 00:04:59.378 10:30:23 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:59.378 10:30:23 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:59.378 ************************************ 00:04:59.378 END TEST env_mem_callbacks 00:04:59.378 ************************************ 00:04:59.378 00:04:59.378 real 0m7.280s 00:04:59.378 user 0m1.004s 00:04:59.378 sys 0m0.811s 00:04:59.378 10:30:23 env -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:59.378 10:30:23 env -- common/autotest_common.sh@10 -- # set +x 00:04:59.378 ************************************ 00:04:59.378 END TEST env 00:04:59.378 ************************************ 00:04:59.378 10:30:23 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:59.378 10:30:23 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:59.378 10:30:23 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:59.378 10:30:23 -- common/autotest_common.sh@10 -- # set +x 00:04:59.378 ************************************ 00:04:59.378 START TEST rpc 00:04:59.378 ************************************ 00:04:59.378 10:30:23 rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:59.640 * Looking for test storage... 00:04:59.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:59.640 10:30:23 rpc -- rpc/rpc.sh@65 -- # spdk_pid=613231 00:04:59.640 10:30:23 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:59.640 10:30:23 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:59.640 10:30:23 rpc -- rpc/rpc.sh@67 -- # waitforlisten 613231 00:04:59.640 10:30:23 rpc -- common/autotest_common.sh@830 -- # '[' -z 613231 ']' 00:04:59.640 10:30:23 rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.640 10:30:23 rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:59.640 10:30:23 rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
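The rpc suite starts spdk_tgt with the bdev tracepoint group enabled (-e bdev) and then blocks in waitforlisten until the RPC server is accepting connections on /var/tmp/spdk.sock. A simplified sketch of that launch-and-wait step; the binary path, flag and socket path are taken from the trace, while the polling loop is only an illustration of the idea, not the waitforlisten implementation:

SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
"$SPDK_TGT" -e bdev &
spdk_pid=$!
# Poll until the RPC UNIX socket appears, bailing out if the target dies first.
for _ in $(seq 1 100); do
    [ -S /var/tmp/spdk.sock ] && break
    kill -0 "$spdk_pid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
    sleep 0.1
done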
00:04:59.640 10:30:23 rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:59.640 10:30:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.640 [2024-06-10 10:30:23.815495] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:04:59.640 [2024-06-10 10:30:23.815564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid613231 ] 00:04:59.640 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.640 [2024-06-10 10:30:23.882751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.901 [2024-06-10 10:30:23.957978] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:59.901 [2024-06-10 10:30:23.958017] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 613231' to capture a snapshot of events at runtime. 00:04:59.901 [2024-06-10 10:30:23.958025] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:59.901 [2024-06-10 10:30:23.958031] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:59.901 [2024-06-10 10:30:23.958037] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid613231 for offline analysis/debug. 00:04:59.901 [2024-06-10 10:30:23.958056] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.474 10:30:24 rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:00.474 10:30:24 rpc -- common/autotest_common.sh@863 -- # return 0 00:05:00.474 10:30:24 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:00.474 10:30:24 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:00.474 10:30:24 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:00.474 10:30:24 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:00.474 10:30:24 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:00.474 10:30:24 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:00.474 10:30:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.474 ************************************ 00:05:00.474 START TEST rpc_integrity 00:05:00.474 ************************************ 00:05:00.474 10:30:24 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:05:00.474 10:30:24 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:00.474 10:30:24 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:00.474 10:30:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.474 10:30:24 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:00.474 10:30:24 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:00.474 10:30:24 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:00.474 10:30:24 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:00.474 10:30:24 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:00.474 10:30:24 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:00.474 10:30:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.474 10:30:24 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:00.474 10:30:24 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:00.474 10:30:24 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:00.474 10:30:24 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:00.474 10:30:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.474 10:30:24 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:00.474 10:30:24 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:00.474 { 00:05:00.474 "name": "Malloc0", 00:05:00.474 "aliases": [ 00:05:00.474 "c0a1a889-f35b-4541-8181-92ae0fbffefd" 00:05:00.474 ], 00:05:00.474 "product_name": "Malloc disk", 00:05:00.474 "block_size": 512, 00:05:00.474 "num_blocks": 16384, 00:05:00.474 "uuid": "c0a1a889-f35b-4541-8181-92ae0fbffefd", 00:05:00.474 "assigned_rate_limits": { 00:05:00.474 "rw_ios_per_sec": 0, 00:05:00.474 "rw_mbytes_per_sec": 0, 00:05:00.474 "r_mbytes_per_sec": 0, 00:05:00.474 "w_mbytes_per_sec": 0 00:05:00.474 }, 00:05:00.474 "claimed": false, 00:05:00.474 "zoned": false, 00:05:00.474 "supported_io_types": { 00:05:00.474 "read": true, 00:05:00.474 "write": true, 00:05:00.474 "unmap": true, 00:05:00.474 "write_zeroes": true, 00:05:00.474 "flush": true, 00:05:00.474 "reset": true, 00:05:00.474 "compare": false, 00:05:00.474 "compare_and_write": false, 00:05:00.474 "abort": true, 00:05:00.474 "nvme_admin": false, 00:05:00.474 "nvme_io": false 00:05:00.474 }, 00:05:00.474 "memory_domains": [ 00:05:00.474 { 00:05:00.474 "dma_device_id": "system", 00:05:00.474 "dma_device_type": 1 00:05:00.474 }, 00:05:00.474 { 00:05:00.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.474 "dma_device_type": 2 00:05:00.474 } 00:05:00.474 ], 00:05:00.474 "driver_specific": {} 00:05:00.474 } 00:05:00.474 ]' 00:05:00.474 10:30:24 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:00.474 10:30:24 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:00.474 10:30:24 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:00.474 10:30:24 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:00.474 10:30:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.474 [2024-06-10 10:30:24.741734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:00.474 [2024-06-10 10:30:24.741766] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:00.474 [2024-06-10 10:30:24.741779] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc68530 00:05:00.474 [2024-06-10 10:30:24.741786] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:00.474 [2024-06-10 10:30:24.743073] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:00.474 [2024-06-10 10:30:24.743093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:00.474 Passthru0 00:05:00.474 10:30:24 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:00.474 10:30:24 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:00.474 10:30:24 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:00.474 10:30:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.736 10:30:24 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:00.736 10:30:24 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:00.736 { 00:05:00.736 "name": "Malloc0", 00:05:00.736 "aliases": [ 00:05:00.736 "c0a1a889-f35b-4541-8181-92ae0fbffefd" 00:05:00.736 ], 00:05:00.736 "product_name": "Malloc disk", 00:05:00.736 "block_size": 512, 00:05:00.736 "num_blocks": 16384, 00:05:00.736 "uuid": "c0a1a889-f35b-4541-8181-92ae0fbffefd", 00:05:00.736 "assigned_rate_limits": { 00:05:00.736 "rw_ios_per_sec": 0, 00:05:00.736 "rw_mbytes_per_sec": 0, 00:05:00.736 "r_mbytes_per_sec": 0, 00:05:00.736 "w_mbytes_per_sec": 0 00:05:00.736 }, 00:05:00.736 "claimed": true, 00:05:00.736 "claim_type": "exclusive_write", 00:05:00.736 "zoned": false, 00:05:00.736 "supported_io_types": { 00:05:00.736 "read": true, 00:05:00.736 "write": true, 00:05:00.736 "unmap": true, 00:05:00.736 "write_zeroes": true, 00:05:00.736 "flush": true, 00:05:00.736 "reset": true, 00:05:00.736 "compare": false, 00:05:00.736 "compare_and_write": false, 00:05:00.736 "abort": true, 00:05:00.736 "nvme_admin": false, 00:05:00.736 "nvme_io": false 00:05:00.736 }, 00:05:00.736 "memory_domains": [ 00:05:00.736 { 00:05:00.736 "dma_device_id": "system", 00:05:00.736 "dma_device_type": 1 00:05:00.736 }, 00:05:00.736 { 00:05:00.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.736 "dma_device_type": 2 00:05:00.736 } 00:05:00.736 ], 00:05:00.736 "driver_specific": {} 00:05:00.736 }, 00:05:00.736 { 00:05:00.736 "name": "Passthru0", 00:05:00.736 "aliases": [ 00:05:00.736 "42eff46e-a086-5bca-9fc6-7ccfbbd63d82" 00:05:00.736 ], 00:05:00.736 "product_name": "passthru", 00:05:00.736 "block_size": 512, 00:05:00.736 "num_blocks": 16384, 00:05:00.736 "uuid": "42eff46e-a086-5bca-9fc6-7ccfbbd63d82", 00:05:00.736 "assigned_rate_limits": { 00:05:00.736 "rw_ios_per_sec": 0, 00:05:00.736 "rw_mbytes_per_sec": 0, 00:05:00.736 "r_mbytes_per_sec": 0, 00:05:00.736 "w_mbytes_per_sec": 0 00:05:00.736 }, 00:05:00.736 "claimed": false, 00:05:00.736 "zoned": false, 00:05:00.736 "supported_io_types": { 00:05:00.736 "read": true, 00:05:00.736 "write": true, 00:05:00.736 "unmap": true, 00:05:00.736 "write_zeroes": true, 00:05:00.736 "flush": true, 00:05:00.736 "reset": true, 00:05:00.736 "compare": false, 00:05:00.736 "compare_and_write": false, 00:05:00.736 "abort": true, 00:05:00.736 "nvme_admin": false, 00:05:00.736 "nvme_io": false 00:05:00.736 }, 00:05:00.736 "memory_domains": [ 00:05:00.736 { 00:05:00.736 "dma_device_id": "system", 00:05:00.736 "dma_device_type": 1 00:05:00.736 }, 00:05:00.736 { 00:05:00.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.736 "dma_device_type": 2 00:05:00.736 } 00:05:00.736 ], 00:05:00.736 "driver_specific": { 00:05:00.736 "passthru": { 00:05:00.736 "name": "Passthru0", 00:05:00.736 "base_bdev_name": "Malloc0" 00:05:00.736 } 00:05:00.736 } 00:05:00.736 } 00:05:00.736 ]' 00:05:00.736 10:30:24 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:00.736 10:30:24 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:00.736 10:30:24 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:00.736 10:30:24 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:00.736 10:30:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.736 
10:30:24 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:00.736 10:30:24 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:00.736 10:30:24 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:00.736 10:30:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.736 10:30:24 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:00.736 10:30:24 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:00.736 10:30:24 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:00.736 10:30:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.736 10:30:24 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:00.736 10:30:24 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:00.736 10:30:24 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:00.736 10:30:24 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:00.736 00:05:00.736 real 0m0.291s 00:05:00.736 user 0m0.190s 00:05:00.736 sys 0m0.033s 00:05:00.736 10:30:24 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:00.736 10:30:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:00.736 ************************************ 00:05:00.736 END TEST rpc_integrity 00:05:00.736 ************************************ 00:05:00.736 10:30:24 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:00.736 10:30:24 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:00.736 10:30:24 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:00.736 10:30:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.736 ************************************ 00:05:00.736 START TEST rpc_plugins 00:05:00.736 ************************************ 00:05:00.736 10:30:24 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # rpc_plugins 00:05:00.736 10:30:24 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:00.736 10:30:24 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:00.736 10:30:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:00.736 10:30:24 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:00.736 10:30:24 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:00.736 10:30:24 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:00.736 10:30:24 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:00.736 10:30:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:00.736 10:30:24 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:00.736 10:30:24 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:00.736 { 00:05:00.736 "name": "Malloc1", 00:05:00.736 "aliases": [ 00:05:00.736 "6516b7f8-ce97-450e-9bef-cd6192fdac5d" 00:05:00.736 ], 00:05:00.736 "product_name": "Malloc disk", 00:05:00.736 "block_size": 4096, 00:05:00.736 "num_blocks": 256, 00:05:00.736 "uuid": "6516b7f8-ce97-450e-9bef-cd6192fdac5d", 00:05:00.736 "assigned_rate_limits": { 00:05:00.736 "rw_ios_per_sec": 0, 00:05:00.736 "rw_mbytes_per_sec": 0, 00:05:00.736 "r_mbytes_per_sec": 0, 00:05:00.736 "w_mbytes_per_sec": 0 00:05:00.736 }, 00:05:00.736 "claimed": false, 00:05:00.736 "zoned": false, 00:05:00.736 "supported_io_types": { 00:05:00.736 "read": true, 00:05:00.736 "write": true, 00:05:00.736 "unmap": true, 00:05:00.736 "write_zeroes": true, 00:05:00.736 
"flush": true, 00:05:00.736 "reset": true, 00:05:00.736 "compare": false, 00:05:00.736 "compare_and_write": false, 00:05:00.736 "abort": true, 00:05:00.736 "nvme_admin": false, 00:05:00.736 "nvme_io": false 00:05:00.736 }, 00:05:00.736 "memory_domains": [ 00:05:00.736 { 00:05:00.736 "dma_device_id": "system", 00:05:00.736 "dma_device_type": 1 00:05:00.736 }, 00:05:00.736 { 00:05:00.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.736 "dma_device_type": 2 00:05:00.736 } 00:05:00.736 ], 00:05:00.736 "driver_specific": {} 00:05:00.736 } 00:05:00.736 ]' 00:05:00.736 10:30:24 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:00.997 10:30:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:00.997 10:30:25 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:00.997 10:30:25 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:00.997 10:30:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:00.997 10:30:25 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:00.997 10:30:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:00.997 10:30:25 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:00.997 10:30:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:00.997 10:30:25 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:00.997 10:30:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:00.997 10:30:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:00.997 10:30:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:00.997 00:05:00.997 real 0m0.146s 00:05:00.997 user 0m0.096s 00:05:00.997 sys 0m0.016s 00:05:00.997 10:30:25 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:00.997 10:30:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:00.997 ************************************ 00:05:00.997 END TEST rpc_plugins 00:05:00.997 ************************************ 00:05:00.997 10:30:25 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:00.997 10:30:25 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:00.997 10:30:25 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:00.997 10:30:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.997 ************************************ 00:05:00.997 START TEST rpc_trace_cmd_test 00:05:00.997 ************************************ 00:05:00.997 10:30:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # rpc_trace_cmd_test 00:05:00.997 10:30:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:00.998 10:30:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:00.998 10:30:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:00.998 10:30:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:00.998 10:30:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:00.998 10:30:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:00.998 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid613231", 00:05:00.998 "tpoint_group_mask": "0x8", 00:05:00.998 "iscsi_conn": { 00:05:00.998 "mask": "0x2", 00:05:00.998 "tpoint_mask": "0x0" 00:05:00.998 }, 00:05:00.998 "scsi": { 00:05:00.998 "mask": "0x4", 00:05:00.998 "tpoint_mask": "0x0" 00:05:00.998 }, 00:05:00.998 "bdev": { 00:05:00.998 "mask": "0x8", 00:05:00.998 "tpoint_mask": 
"0xffffffffffffffff" 00:05:00.998 }, 00:05:00.998 "nvmf_rdma": { 00:05:00.998 "mask": "0x10", 00:05:00.998 "tpoint_mask": "0x0" 00:05:00.998 }, 00:05:00.998 "nvmf_tcp": { 00:05:00.998 "mask": "0x20", 00:05:00.998 "tpoint_mask": "0x0" 00:05:00.998 }, 00:05:00.998 "ftl": { 00:05:00.998 "mask": "0x40", 00:05:00.998 "tpoint_mask": "0x0" 00:05:00.998 }, 00:05:00.998 "blobfs": { 00:05:00.998 "mask": "0x80", 00:05:00.998 "tpoint_mask": "0x0" 00:05:00.998 }, 00:05:00.998 "dsa": { 00:05:00.998 "mask": "0x200", 00:05:00.998 "tpoint_mask": "0x0" 00:05:00.998 }, 00:05:00.998 "thread": { 00:05:00.998 "mask": "0x400", 00:05:00.998 "tpoint_mask": "0x0" 00:05:00.998 }, 00:05:00.998 "nvme_pcie": { 00:05:00.998 "mask": "0x800", 00:05:00.998 "tpoint_mask": "0x0" 00:05:00.998 }, 00:05:00.998 "iaa": { 00:05:00.998 "mask": "0x1000", 00:05:00.998 "tpoint_mask": "0x0" 00:05:00.998 }, 00:05:00.998 "nvme_tcp": { 00:05:00.998 "mask": "0x2000", 00:05:00.998 "tpoint_mask": "0x0" 00:05:00.998 }, 00:05:00.998 "bdev_nvme": { 00:05:00.998 "mask": "0x4000", 00:05:00.998 "tpoint_mask": "0x0" 00:05:00.998 }, 00:05:00.998 "sock": { 00:05:00.998 "mask": "0x8000", 00:05:00.998 "tpoint_mask": "0x0" 00:05:00.998 } 00:05:00.998 }' 00:05:00.998 10:30:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:00.998 10:30:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:00.998 10:30:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:01.259 10:30:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:01.259 10:30:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:01.259 10:30:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:01.259 10:30:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:01.259 10:30:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:01.259 10:30:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:01.259 10:30:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:01.259 00:05:01.259 real 0m0.246s 00:05:01.259 user 0m0.208s 00:05:01.259 sys 0m0.031s 00:05:01.259 10:30:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:01.259 10:30:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:01.259 ************************************ 00:05:01.259 END TEST rpc_trace_cmd_test 00:05:01.259 ************************************ 00:05:01.259 10:30:25 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:01.259 10:30:25 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:01.259 10:30:25 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:01.259 10:30:25 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:01.259 10:30:25 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:01.259 10:30:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.259 ************************************ 00:05:01.259 START TEST rpc_daemon_integrity 00:05:01.259 ************************************ 00:05:01.259 10:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:05:01.259 10:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:01.259 10:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:01.259 10:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.259 10:30:25 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:01.259 10:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:01.259 10:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:01.521 10:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:01.521 10:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:01.521 10:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:01.521 10:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.521 10:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:01.521 10:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:01.521 10:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:01.521 10:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:01.521 10:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.521 10:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:01.521 10:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:01.521 { 00:05:01.521 "name": "Malloc2", 00:05:01.521 "aliases": [ 00:05:01.521 "0b905991-7a34-45da-9759-3152cf076e4a" 00:05:01.521 ], 00:05:01.521 "product_name": "Malloc disk", 00:05:01.521 "block_size": 512, 00:05:01.521 "num_blocks": 16384, 00:05:01.521 "uuid": "0b905991-7a34-45da-9759-3152cf076e4a", 00:05:01.521 "assigned_rate_limits": { 00:05:01.521 "rw_ios_per_sec": 0, 00:05:01.521 "rw_mbytes_per_sec": 0, 00:05:01.521 "r_mbytes_per_sec": 0, 00:05:01.521 "w_mbytes_per_sec": 0 00:05:01.521 }, 00:05:01.521 "claimed": false, 00:05:01.521 "zoned": false, 00:05:01.521 "supported_io_types": { 00:05:01.521 "read": true, 00:05:01.521 "write": true, 00:05:01.521 "unmap": true, 00:05:01.521 "write_zeroes": true, 00:05:01.521 "flush": true, 00:05:01.521 "reset": true, 00:05:01.521 "compare": false, 00:05:01.521 "compare_and_write": false, 00:05:01.521 "abort": true, 00:05:01.521 "nvme_admin": false, 00:05:01.521 "nvme_io": false 00:05:01.521 }, 00:05:01.521 "memory_domains": [ 00:05:01.521 { 00:05:01.521 "dma_device_id": "system", 00:05:01.521 "dma_device_type": 1 00:05:01.521 }, 00:05:01.521 { 00:05:01.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.521 "dma_device_type": 2 00:05:01.521 } 00:05:01.521 ], 00:05:01.521 "driver_specific": {} 00:05:01.521 } 00:05:01.521 ]' 00:05:01.521 10:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:01.521 10:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:01.521 10:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:01.521 10:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:01.521 10:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.521 [2024-06-10 10:30:25.644208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:01.521 [2024-06-10 10:30:25.644239] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:01.521 [2024-06-10 10:30:25.644258] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc6ac70 00:05:01.521 [2024-06-10 10:30:25.644265] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:01.521 [2024-06-10 10:30:25.645487] vbdev_passthru.c: 
708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:01.521 [2024-06-10 10:30:25.645507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:01.521 Passthru0 00:05:01.521 10:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:01.521 10:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:01.521 10:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:01.521 10:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.521 10:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:01.521 10:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:01.521 { 00:05:01.521 "name": "Malloc2", 00:05:01.521 "aliases": [ 00:05:01.521 "0b905991-7a34-45da-9759-3152cf076e4a" 00:05:01.521 ], 00:05:01.521 "product_name": "Malloc disk", 00:05:01.521 "block_size": 512, 00:05:01.521 "num_blocks": 16384, 00:05:01.521 "uuid": "0b905991-7a34-45da-9759-3152cf076e4a", 00:05:01.521 "assigned_rate_limits": { 00:05:01.521 "rw_ios_per_sec": 0, 00:05:01.521 "rw_mbytes_per_sec": 0, 00:05:01.521 "r_mbytes_per_sec": 0, 00:05:01.521 "w_mbytes_per_sec": 0 00:05:01.521 }, 00:05:01.521 "claimed": true, 00:05:01.522 "claim_type": "exclusive_write", 00:05:01.522 "zoned": false, 00:05:01.522 "supported_io_types": { 00:05:01.522 "read": true, 00:05:01.522 "write": true, 00:05:01.522 "unmap": true, 00:05:01.522 "write_zeroes": true, 00:05:01.522 "flush": true, 00:05:01.522 "reset": true, 00:05:01.522 "compare": false, 00:05:01.522 "compare_and_write": false, 00:05:01.522 "abort": true, 00:05:01.522 "nvme_admin": false, 00:05:01.522 "nvme_io": false 00:05:01.522 }, 00:05:01.522 "memory_domains": [ 00:05:01.522 { 00:05:01.522 "dma_device_id": "system", 00:05:01.522 "dma_device_type": 1 00:05:01.522 }, 00:05:01.522 { 00:05:01.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.522 "dma_device_type": 2 00:05:01.522 } 00:05:01.522 ], 00:05:01.522 "driver_specific": {} 00:05:01.522 }, 00:05:01.522 { 00:05:01.522 "name": "Passthru0", 00:05:01.522 "aliases": [ 00:05:01.522 "eef7fe9a-aa06-5580-89f3-fe0e57b3a440" 00:05:01.522 ], 00:05:01.522 "product_name": "passthru", 00:05:01.522 "block_size": 512, 00:05:01.522 "num_blocks": 16384, 00:05:01.522 "uuid": "eef7fe9a-aa06-5580-89f3-fe0e57b3a440", 00:05:01.522 "assigned_rate_limits": { 00:05:01.522 "rw_ios_per_sec": 0, 00:05:01.522 "rw_mbytes_per_sec": 0, 00:05:01.522 "r_mbytes_per_sec": 0, 00:05:01.522 "w_mbytes_per_sec": 0 00:05:01.522 }, 00:05:01.522 "claimed": false, 00:05:01.522 "zoned": false, 00:05:01.522 "supported_io_types": { 00:05:01.522 "read": true, 00:05:01.522 "write": true, 00:05:01.522 "unmap": true, 00:05:01.522 "write_zeroes": true, 00:05:01.522 "flush": true, 00:05:01.522 "reset": true, 00:05:01.522 "compare": false, 00:05:01.522 "compare_and_write": false, 00:05:01.522 "abort": true, 00:05:01.522 "nvme_admin": false, 00:05:01.522 "nvme_io": false 00:05:01.522 }, 00:05:01.522 "memory_domains": [ 00:05:01.522 { 00:05:01.522 "dma_device_id": "system", 00:05:01.522 "dma_device_type": 1 00:05:01.522 }, 00:05:01.522 { 00:05:01.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.522 "dma_device_type": 2 00:05:01.522 } 00:05:01.522 ], 00:05:01.522 "driver_specific": { 00:05:01.522 "passthru": { 00:05:01.522 "name": "Passthru0", 00:05:01.522 "base_bdev_name": "Malloc2" 00:05:01.522 } 00:05:01.522 } 00:05:01.522 } 00:05:01.522 ]' 00:05:01.522 10:30:25 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:01.522 10:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:01.522 10:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:01.522 10:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:01.522 10:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.522 10:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:01.522 10:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:01.522 10:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:01.522 10:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.522 10:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:01.522 10:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:01.522 10:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:01.522 10:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.522 10:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:01.522 10:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:01.522 10:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:01.522 10:30:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:01.522 00:05:01.522 real 0m0.290s 00:05:01.522 user 0m0.189s 00:05:01.522 sys 0m0.040s 00:05:01.522 10:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:01.522 10:30:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.522 ************************************ 00:05:01.522 END TEST rpc_daemon_integrity 00:05:01.522 ************************************ 00:05:01.784 10:30:25 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:01.784 10:30:25 rpc -- rpc/rpc.sh@84 -- # killprocess 613231 00:05:01.784 10:30:25 rpc -- common/autotest_common.sh@949 -- # '[' -z 613231 ']' 00:05:01.784 10:30:25 rpc -- common/autotest_common.sh@953 -- # kill -0 613231 00:05:01.784 10:30:25 rpc -- common/autotest_common.sh@954 -- # uname 00:05:01.784 10:30:25 rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:01.784 10:30:25 rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 613231 00:05:01.784 10:30:25 rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:01.784 10:30:25 rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:01.784 10:30:25 rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 613231' 00:05:01.784 killing process with pid 613231 00:05:01.784 10:30:25 rpc -- common/autotest_common.sh@968 -- # kill 613231 00:05:01.784 10:30:25 rpc -- common/autotest_common.sh@973 -- # wait 613231 00:05:02.045 00:05:02.045 real 0m2.437s 00:05:02.045 user 0m3.186s 00:05:02.045 sys 0m0.687s 00:05:02.045 10:30:26 rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:02.045 10:30:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.045 ************************************ 00:05:02.045 END TEST rpc 00:05:02.045 ************************************ 00:05:02.045 10:30:26 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:02.045 10:30:26 -- 
common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:02.045 10:30:26 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:02.045 10:30:26 -- common/autotest_common.sh@10 -- # set +x 00:05:02.045 ************************************ 00:05:02.045 START TEST skip_rpc 00:05:02.045 ************************************ 00:05:02.045 10:30:26 skip_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:02.045 * Looking for test storage... 00:05:02.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:02.045 10:30:26 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:02.045 10:30:26 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:02.045 10:30:26 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:02.045 10:30:26 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:02.045 10:30:26 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:02.045 10:30:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.045 ************************************ 00:05:02.045 START TEST skip_rpc 00:05:02.045 ************************************ 00:05:02.045 10:30:26 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # test_skip_rpc 00:05:02.045 10:30:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=613942 00:05:02.045 10:30:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.045 10:30:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:02.045 10:30:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:02.306 [2024-06-10 10:30:26.356852] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
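The test_skip_rpc case launched just above starts the target with --no-rpc-server, so the assertion that follows the 5-second sleep is simply that any RPC call has to fail (the harness expresses this as NOT rpc_cmd spdk_get_version). A minimal standalone equivalent, assuming a built tree in $SPDK_DIR:

  $SPDK_DIR/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt_pid=$!
  sleep 5          # as in skip_rpc.sh@19; /var/tmp/spdk.sock is never created

  # expected to fail: no RPC server is listening
  if $SPDK_DIR/scripts/rpc.py spdk_get_version; then
      echo "unexpected: RPC succeeded without an RPC server" >&2
      exit 1
  fi

  kill $tgt_pid; wait $tgt_pid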
00:05:02.306 [2024-06-10 10:30:26.356906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid613942 ] 00:05:02.306 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.306 [2024-06-10 10:30:26.419914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.306 [2024-06-10 10:30:26.485118] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.594 10:30:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:07.594 10:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:05:07.594 10:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:07.594 10:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:05:07.594 10:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:07.594 10:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:05:07.594 10:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:07.594 10:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:05:07.594 10:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:07.594 10:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.594 10:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:05:07.594 10:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:05:07.594 10:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:07.594 10:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:07.594 10:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:07.594 10:30:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:07.594 10:30:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 613942 00:05:07.594 10:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@949 -- # '[' -z 613942 ']' 00:05:07.594 10:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # kill -0 613942 00:05:07.594 10:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # uname 00:05:07.594 10:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:07.594 10:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 613942 00:05:07.594 10:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:07.594 10:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:07.594 10:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 613942' 00:05:07.595 killing process with pid 613942 00:05:07.595 10:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # kill 613942 00:05:07.595 10:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # wait 613942 00:05:07.595 00:05:07.595 real 0m5.277s 00:05:07.595 user 0m5.079s 00:05:07.595 sys 0m0.230s 00:05:07.595 10:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:07.595 10:30:31 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.595 ************************************ 00:05:07.595 END TEST skip_rpc 
00:05:07.595 ************************************ 00:05:07.595 10:30:31 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:07.595 10:30:31 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:07.595 10:30:31 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:07.595 10:30:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.595 ************************************ 00:05:07.595 START TEST skip_rpc_with_json 00:05:07.595 ************************************ 00:05:07.595 10:30:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_json 00:05:07.595 10:30:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:07.595 10:30:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=614978 00:05:07.595 10:30:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:07.595 10:30:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 614978 00:05:07.595 10:30:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:07.595 10:30:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@830 -- # '[' -z 614978 ']' 00:05:07.595 10:30:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.595 10:30:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:07.595 10:30:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.595 10:30:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:07.595 10:30:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:07.595 [2024-06-10 10:30:31.709358] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
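test_skip_rpc_with_json, which starts here, exercises the save_config/--json round trip: build state over RPC (an NVMe-oF TCP transport), dump it to config.json, then restart the target from that file with no RPC server and confirm the transport is recreated. A condensed sketch, with the long CONFIG_PATH/LOG_PATH file names from skip_rpc.sh shortened to config.json and log.txt:

  # first target: create the configuration over RPC and save it
  $SPDK_DIR/build/bin/spdk_tgt -m 0x1 &
  tgt_pid=$!
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.2; done
  $SPDK_DIR/scripts/rpc.py nvmf_create_transport -t tcp
  $SPDK_DIR/scripts/rpc.py save_config > config.json
  kill $tgt_pid; wait $tgt_pid

  # second target: replay the JSON config, no RPC server needed
  $SPDK_DIR/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
  tgt_pid=$!
  sleep 5
  grep -q 'TCP Transport Init' log.txt      # the check made at skip_rpc.sh@51
  kill $tgt_pid; wait $tgt_pid
  rm config.json log.txt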
00:05:07.595 [2024-06-10 10:30:31.709409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid614978 ] 00:05:07.595 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.595 [2024-06-10 10:30:31.769504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.595 [2024-06-10 10:30:31.836201] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.538 10:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:08.538 10:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@863 -- # return 0 00:05:08.538 10:30:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:08.538 10:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:08.538 10:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:08.538 [2024-06-10 10:30:32.467952] nvmf_rpc.c:2548:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:08.538 request: 00:05:08.538 { 00:05:08.538 "trtype": "tcp", 00:05:08.538 "method": "nvmf_get_transports", 00:05:08.538 "req_id": 1 00:05:08.538 } 00:05:08.538 Got JSON-RPC error response 00:05:08.538 response: 00:05:08.538 { 00:05:08.538 "code": -19, 00:05:08.538 "message": "No such device" 00:05:08.538 } 00:05:08.538 10:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:05:08.538 10:30:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:08.538 10:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:08.538 10:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:08.538 [2024-06-10 10:30:32.480068] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:08.538 10:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:08.538 10:30:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:08.538 10:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:08.538 10:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:08.538 10:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:08.538 10:30:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:08.538 { 00:05:08.538 "subsystems": [ 00:05:08.538 { 00:05:08.538 "subsystem": "vfio_user_target", 00:05:08.538 "config": null 00:05:08.538 }, 00:05:08.538 { 00:05:08.538 "subsystem": "keyring", 00:05:08.538 "config": [] 00:05:08.538 }, 00:05:08.538 { 00:05:08.538 "subsystem": "iobuf", 00:05:08.538 "config": [ 00:05:08.538 { 00:05:08.538 "method": "iobuf_set_options", 00:05:08.538 "params": { 00:05:08.538 "small_pool_count": 8192, 00:05:08.538 "large_pool_count": 1024, 00:05:08.538 "small_bufsize": 8192, 00:05:08.538 "large_bufsize": 135168 00:05:08.538 } 00:05:08.538 } 00:05:08.538 ] 00:05:08.538 }, 00:05:08.538 { 00:05:08.538 "subsystem": "sock", 00:05:08.538 "config": [ 00:05:08.538 { 00:05:08.538 "method": "sock_set_default_impl", 00:05:08.538 "params": { 00:05:08.538 "impl_name": "posix" 00:05:08.538 } 00:05:08.538 }, 00:05:08.538 { 00:05:08.538 "method": 
"sock_impl_set_options", 00:05:08.538 "params": { 00:05:08.538 "impl_name": "ssl", 00:05:08.538 "recv_buf_size": 4096, 00:05:08.538 "send_buf_size": 4096, 00:05:08.538 "enable_recv_pipe": true, 00:05:08.538 "enable_quickack": false, 00:05:08.538 "enable_placement_id": 0, 00:05:08.538 "enable_zerocopy_send_server": true, 00:05:08.538 "enable_zerocopy_send_client": false, 00:05:08.538 "zerocopy_threshold": 0, 00:05:08.538 "tls_version": 0, 00:05:08.538 "enable_ktls": false 00:05:08.538 } 00:05:08.538 }, 00:05:08.538 { 00:05:08.538 "method": "sock_impl_set_options", 00:05:08.538 "params": { 00:05:08.538 "impl_name": "posix", 00:05:08.538 "recv_buf_size": 2097152, 00:05:08.538 "send_buf_size": 2097152, 00:05:08.538 "enable_recv_pipe": true, 00:05:08.538 "enable_quickack": false, 00:05:08.538 "enable_placement_id": 0, 00:05:08.538 "enable_zerocopy_send_server": true, 00:05:08.538 "enable_zerocopy_send_client": false, 00:05:08.538 "zerocopy_threshold": 0, 00:05:08.538 "tls_version": 0, 00:05:08.538 "enable_ktls": false 00:05:08.538 } 00:05:08.538 } 00:05:08.538 ] 00:05:08.538 }, 00:05:08.538 { 00:05:08.538 "subsystem": "vmd", 00:05:08.538 "config": [] 00:05:08.538 }, 00:05:08.538 { 00:05:08.538 "subsystem": "accel", 00:05:08.538 "config": [ 00:05:08.538 { 00:05:08.538 "method": "accel_set_options", 00:05:08.538 "params": { 00:05:08.538 "small_cache_size": 128, 00:05:08.538 "large_cache_size": 16, 00:05:08.538 "task_count": 2048, 00:05:08.538 "sequence_count": 2048, 00:05:08.538 "buf_count": 2048 00:05:08.538 } 00:05:08.538 } 00:05:08.538 ] 00:05:08.538 }, 00:05:08.538 { 00:05:08.538 "subsystem": "bdev", 00:05:08.538 "config": [ 00:05:08.538 { 00:05:08.538 "method": "bdev_set_options", 00:05:08.538 "params": { 00:05:08.538 "bdev_io_pool_size": 65535, 00:05:08.538 "bdev_io_cache_size": 256, 00:05:08.538 "bdev_auto_examine": true, 00:05:08.538 "iobuf_small_cache_size": 128, 00:05:08.538 "iobuf_large_cache_size": 16 00:05:08.538 } 00:05:08.538 }, 00:05:08.538 { 00:05:08.538 "method": "bdev_raid_set_options", 00:05:08.538 "params": { 00:05:08.538 "process_window_size_kb": 1024 00:05:08.538 } 00:05:08.538 }, 00:05:08.538 { 00:05:08.538 "method": "bdev_iscsi_set_options", 00:05:08.538 "params": { 00:05:08.538 "timeout_sec": 30 00:05:08.538 } 00:05:08.538 }, 00:05:08.538 { 00:05:08.538 "method": "bdev_nvme_set_options", 00:05:08.538 "params": { 00:05:08.538 "action_on_timeout": "none", 00:05:08.538 "timeout_us": 0, 00:05:08.538 "timeout_admin_us": 0, 00:05:08.538 "keep_alive_timeout_ms": 10000, 00:05:08.538 "arbitration_burst": 0, 00:05:08.538 "low_priority_weight": 0, 00:05:08.538 "medium_priority_weight": 0, 00:05:08.538 "high_priority_weight": 0, 00:05:08.538 "nvme_adminq_poll_period_us": 10000, 00:05:08.538 "nvme_ioq_poll_period_us": 0, 00:05:08.538 "io_queue_requests": 0, 00:05:08.538 "delay_cmd_submit": true, 00:05:08.538 "transport_retry_count": 4, 00:05:08.538 "bdev_retry_count": 3, 00:05:08.538 "transport_ack_timeout": 0, 00:05:08.538 "ctrlr_loss_timeout_sec": 0, 00:05:08.538 "reconnect_delay_sec": 0, 00:05:08.538 "fast_io_fail_timeout_sec": 0, 00:05:08.538 "disable_auto_failback": false, 00:05:08.538 "generate_uuids": false, 00:05:08.538 "transport_tos": 0, 00:05:08.538 "nvme_error_stat": false, 00:05:08.538 "rdma_srq_size": 0, 00:05:08.538 "io_path_stat": false, 00:05:08.538 "allow_accel_sequence": false, 00:05:08.538 "rdma_max_cq_size": 0, 00:05:08.538 "rdma_cm_event_timeout_ms": 0, 00:05:08.538 "dhchap_digests": [ 00:05:08.538 "sha256", 00:05:08.538 "sha384", 00:05:08.538 "sha512" 
00:05:08.538 ], 00:05:08.538 "dhchap_dhgroups": [ 00:05:08.538 "null", 00:05:08.538 "ffdhe2048", 00:05:08.538 "ffdhe3072", 00:05:08.538 "ffdhe4096", 00:05:08.538 "ffdhe6144", 00:05:08.538 "ffdhe8192" 00:05:08.538 ] 00:05:08.538 } 00:05:08.538 }, 00:05:08.538 { 00:05:08.538 "method": "bdev_nvme_set_hotplug", 00:05:08.538 "params": { 00:05:08.538 "period_us": 100000, 00:05:08.538 "enable": false 00:05:08.538 } 00:05:08.538 }, 00:05:08.538 { 00:05:08.538 "method": "bdev_wait_for_examine" 00:05:08.538 } 00:05:08.538 ] 00:05:08.538 }, 00:05:08.538 { 00:05:08.538 "subsystem": "scsi", 00:05:08.538 "config": null 00:05:08.538 }, 00:05:08.538 { 00:05:08.538 "subsystem": "scheduler", 00:05:08.538 "config": [ 00:05:08.538 { 00:05:08.538 "method": "framework_set_scheduler", 00:05:08.538 "params": { 00:05:08.538 "name": "static" 00:05:08.538 } 00:05:08.538 } 00:05:08.538 ] 00:05:08.538 }, 00:05:08.538 { 00:05:08.538 "subsystem": "vhost_scsi", 00:05:08.538 "config": [] 00:05:08.538 }, 00:05:08.538 { 00:05:08.538 "subsystem": "vhost_blk", 00:05:08.538 "config": [] 00:05:08.538 }, 00:05:08.538 { 00:05:08.538 "subsystem": "ublk", 00:05:08.538 "config": [] 00:05:08.538 }, 00:05:08.538 { 00:05:08.538 "subsystem": "nbd", 00:05:08.538 "config": [] 00:05:08.538 }, 00:05:08.538 { 00:05:08.538 "subsystem": "nvmf", 00:05:08.538 "config": [ 00:05:08.538 { 00:05:08.538 "method": "nvmf_set_config", 00:05:08.538 "params": { 00:05:08.538 "discovery_filter": "match_any", 00:05:08.538 "admin_cmd_passthru": { 00:05:08.538 "identify_ctrlr": false 00:05:08.538 } 00:05:08.538 } 00:05:08.539 }, 00:05:08.539 { 00:05:08.539 "method": "nvmf_set_max_subsystems", 00:05:08.539 "params": { 00:05:08.539 "max_subsystems": 1024 00:05:08.539 } 00:05:08.539 }, 00:05:08.539 { 00:05:08.539 "method": "nvmf_set_crdt", 00:05:08.539 "params": { 00:05:08.539 "crdt1": 0, 00:05:08.539 "crdt2": 0, 00:05:08.539 "crdt3": 0 00:05:08.539 } 00:05:08.539 }, 00:05:08.539 { 00:05:08.539 "method": "nvmf_create_transport", 00:05:08.539 "params": { 00:05:08.539 "trtype": "TCP", 00:05:08.539 "max_queue_depth": 128, 00:05:08.539 "max_io_qpairs_per_ctrlr": 127, 00:05:08.539 "in_capsule_data_size": 4096, 00:05:08.539 "max_io_size": 131072, 00:05:08.539 "io_unit_size": 131072, 00:05:08.539 "max_aq_depth": 128, 00:05:08.539 "num_shared_buffers": 511, 00:05:08.539 "buf_cache_size": 4294967295, 00:05:08.539 "dif_insert_or_strip": false, 00:05:08.539 "zcopy": false, 00:05:08.539 "c2h_success": true, 00:05:08.539 "sock_priority": 0, 00:05:08.539 "abort_timeout_sec": 1, 00:05:08.539 "ack_timeout": 0, 00:05:08.539 "data_wr_pool_size": 0 00:05:08.539 } 00:05:08.539 } 00:05:08.539 ] 00:05:08.539 }, 00:05:08.539 { 00:05:08.539 "subsystem": "iscsi", 00:05:08.539 "config": [ 00:05:08.539 { 00:05:08.539 "method": "iscsi_set_options", 00:05:08.539 "params": { 00:05:08.539 "node_base": "iqn.2016-06.io.spdk", 00:05:08.539 "max_sessions": 128, 00:05:08.539 "max_connections_per_session": 2, 00:05:08.539 "max_queue_depth": 64, 00:05:08.539 "default_time2wait": 2, 00:05:08.539 "default_time2retain": 20, 00:05:08.539 "first_burst_length": 8192, 00:05:08.539 "immediate_data": true, 00:05:08.539 "allow_duplicated_isid": false, 00:05:08.539 "error_recovery_level": 0, 00:05:08.539 "nop_timeout": 60, 00:05:08.539 "nop_in_interval": 30, 00:05:08.539 "disable_chap": false, 00:05:08.539 "require_chap": false, 00:05:08.539 "mutual_chap": false, 00:05:08.539 "chap_group": 0, 00:05:08.539 "max_large_datain_per_connection": 64, 00:05:08.539 "max_r2t_per_connection": 4, 00:05:08.539 
"pdu_pool_size": 36864, 00:05:08.539 "immediate_data_pool_size": 16384, 00:05:08.539 "data_out_pool_size": 2048 00:05:08.539 } 00:05:08.539 } 00:05:08.539 ] 00:05:08.539 } 00:05:08.539 ] 00:05:08.539 } 00:05:08.539 10:30:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:08.539 10:30:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 614978 00:05:08.539 10:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 614978 ']' 00:05:08.539 10:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 614978 00:05:08.539 10:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:05:08.539 10:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:08.539 10:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 614978 00:05:08.539 10:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:08.539 10:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:08.539 10:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 614978' 00:05:08.539 killing process with pid 614978 00:05:08.539 10:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 614978 00:05:08.539 10:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 614978 00:05:08.800 10:30:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=615316 00:05:08.800 10:30:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:08.800 10:30:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:14.085 10:30:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 615316 00:05:14.085 10:30:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 615316 ']' 00:05:14.085 10:30:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 615316 00:05:14.085 10:30:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:05:14.085 10:30:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:14.085 10:30:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 615316 00:05:14.085 10:30:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:14.085 10:30:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:14.085 10:30:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 615316' 00:05:14.085 killing process with pid 615316 00:05:14.085 10:30:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 615316 00:05:14.085 10:30:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 615316 00:05:14.085 10:30:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:14.085 10:30:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:14.085 00:05:14.085 real 0m6.534s 
00:05:14.085 user 0m6.418s 00:05:14.085 sys 0m0.512s 00:05:14.086 10:30:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:14.086 10:30:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:14.086 ************************************ 00:05:14.086 END TEST skip_rpc_with_json 00:05:14.086 ************************************ 00:05:14.086 10:30:38 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:14.086 10:30:38 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:14.086 10:30:38 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:14.086 10:30:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.086 ************************************ 00:05:14.086 START TEST skip_rpc_with_delay 00:05:14.086 ************************************ 00:05:14.086 10:30:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_delay 00:05:14.086 10:30:38 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:14.086 10:30:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:05:14.086 10:30:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:14.086 10:30:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.086 10:30:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:14.086 10:30:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.086 10:30:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:14.086 10:30:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.086 10:30:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:14.086 10:30:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.086 10:30:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:14.086 10:30:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:14.086 [2024-06-10 10:30:38.321227] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
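The error logged just above is exactly what test_skip_rpc_with_delay asserts: --wait-for-rpc defers subsystem initialization until an RPC tells the app to proceed, so combining it with --no-rpc-server is rejected at startup. A short sketch of both sides of that contract; framework_start_init is the standard SPDK RPC for resuming deferred init, but it is not exercised in this log, so treat that call as an assumption:

  # invalid: init would wait for an RPC that can never arrive -> spdk_tgt exits non-zero
  if $SPDK_DIR/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "unexpected: invalid flag combination was accepted" >&2
      exit 1
  fi

  # valid: keep the RPC server, pause init, resume it over RPC
  $SPDK_DIR/build/bin/spdk_tgt -m 0x1 --wait-for-rpc &
  tgt_pid=$!
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.2; done
  $SPDK_DIR/scripts/rpc.py framework_start_init    # assumed method name
  kill $tgt_pid; wait $tgt_pid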
00:05:14.086 [2024-06-10 10:30:38.321318] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:14.086 10:30:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:05:14.086 10:30:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:14.086 10:30:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:14.086 10:30:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:14.086 00:05:14.086 real 0m0.071s 00:05:14.086 user 0m0.048s 00:05:14.086 sys 0m0.022s 00:05:14.086 10:30:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:14.086 10:30:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:14.086 ************************************ 00:05:14.086 END TEST skip_rpc_with_delay 00:05:14.086 ************************************ 00:05:14.086 10:30:38 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:14.347 10:30:38 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:14.347 10:30:38 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:14.347 10:30:38 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:14.347 10:30:38 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:14.347 10:30:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.347 ************************************ 00:05:14.347 START TEST exit_on_failed_rpc_init 00:05:14.347 ************************************ 00:05:14.347 10:30:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # test_exit_on_failed_rpc_init 00:05:14.347 10:30:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=616383 00:05:14.347 10:30:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 616383 00:05:14.347 10:30:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:14.347 10:30:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@830 -- # '[' -z 616383 ']' 00:05:14.347 10:30:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.347 10:30:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:14.347 10:30:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.347 10:30:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:14.347 10:30:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:14.347 [2024-06-10 10:30:38.471089] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
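test_exit_on_failed_rpc_init, which starts here, deliberately races two targets for the same default RPC socket: the first instance (core mask 0x1) owns /var/tmp/spdk.sock, so the second (core mask 0x2, launched a few lines below) must fail RPC initialization and exit non-zero, and the trap/killprocess logic then has to unwind cleanly. A stripped-down reproduction of that failure path:

  # first instance claims the default RPC socket
  $SPDK_DIR/build/bin/spdk_tgt -m 0x1 &
  first_pid=$!
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.2; done

  # second instance must fail: /var/tmp/spdk.sock is already in use
  if $SPDK_DIR/build/bin/spdk_tgt -m 0x2; then
      echo "unexpected: second target started despite the socket collision" >&2
      exit 1
  fi

  kill $first_pid; wait $first_pid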
00:05:14.347 [2024-06-10 10:30:38.471151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid616383 ] 00:05:14.347 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.347 [2024-06-10 10:30:38.536713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.347 [2024-06-10 10:30:38.613916] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.288 10:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:15.288 10:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@863 -- # return 0 00:05:15.288 10:30:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.288 10:30:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:15.288 10:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:05:15.288 10:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:15.288 10:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:15.288 10:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:15.288 10:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:15.288 10:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:15.288 10:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:15.288 10:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:15.288 10:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:15.288 10:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:15.288 10:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:15.288 [2024-06-10 10:30:39.300120] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:05:15.288 [2024-06-10 10:30:39.300172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid616710 ] 00:05:15.288 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.288 [2024-06-10 10:30:39.377696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.288 [2024-06-10 10:30:39.441635] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.288 [2024-06-10 10:30:39.441695] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
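The "RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another." error is the point of exit_on_failed_rpc_init: the first spdk_tgt (pid 616383) holds the default RPC socket, so a second instance started without -r cannot bring up its RPC service and has to stop itself. A rough sketch of that collision, assuming the same binary and the default socket reported above:

  SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  "$SPDK_TGT" -m 0x1 &           # first instance claims /var/tmp/spdk.sock
  first=$!
  sleep 2                        # crude stand-in for the waitforlisten helper
  if ! "$SPDK_TGT" -m 0x2; then  # second instance, same default socket
      echo "second target failed as expected; it would need -r <other socket> to coexist"
  fi
  kill "$first"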
00:05:15.288 [2024-06-10 10:30:39.441705] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:15.288 [2024-06-10 10:30:39.441711] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:15.288 10:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:05:15.288 10:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:15.288 10:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:05:15.288 10:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:05:15.288 10:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:05:15.288 10:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:15.288 10:30:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:15.288 10:30:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 616383 00:05:15.288 10:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@949 -- # '[' -z 616383 ']' 00:05:15.288 10:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # kill -0 616383 00:05:15.288 10:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # uname 00:05:15.288 10:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:15.288 10:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 616383 00:05:15.288 10:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:15.288 10:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:15.288 10:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 616383' 00:05:15.288 killing process with pid 616383 00:05:15.288 10:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # kill 616383 00:05:15.288 10:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # wait 616383 00:05:15.548 00:05:15.548 real 0m1.351s 00:05:15.548 user 0m1.571s 00:05:15.548 sys 0m0.388s 00:05:15.548 10:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:15.548 10:30:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:15.548 ************************************ 00:05:15.548 END TEST exit_on_failed_rpc_init 00:05:15.548 ************************************ 00:05:15.548 10:30:39 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:15.548 00:05:15.548 real 0m13.642s 00:05:15.548 user 0m13.271s 00:05:15.548 sys 0m1.432s 00:05:15.548 10:30:39 skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:15.548 10:30:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.548 ************************************ 00:05:15.548 END TEST skip_rpc 00:05:15.548 ************************************ 00:05:15.809 10:30:39 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:15.809 10:30:39 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:15.809 10:30:39 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:15.809 10:30:39 -- 
common/autotest_common.sh@10 -- # set +x 00:05:15.809 ************************************ 00:05:15.809 START TEST rpc_client 00:05:15.809 ************************************ 00:05:15.809 10:30:39 rpc_client -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:15.809 * Looking for test storage... 00:05:15.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:15.809 10:30:39 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:15.809 OK 00:05:15.809 10:30:40 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:15.809 00:05:15.809 real 0m0.122s 00:05:15.809 user 0m0.060s 00:05:15.809 sys 0m0.070s 00:05:15.809 10:30:40 rpc_client -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:15.809 10:30:40 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:15.809 ************************************ 00:05:15.809 END TEST rpc_client 00:05:15.809 ************************************ 00:05:15.810 10:30:40 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:15.810 10:30:40 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:15.810 10:30:40 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:15.810 10:30:40 -- common/autotest_common.sh@10 -- # set +x 00:05:15.810 ************************************ 00:05:15.810 START TEST json_config 00:05:15.810 ************************************ 00:05:15.810 10:30:40 json_config -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:16.075 10:30:40 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:16.075 10:30:40 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:16.075 10:30:40 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:16.075 10:30:40 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:16.075 10:30:40 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:16.075 10:30:40 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:16.075 10:30:40 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:16.075 10:30:40 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:16.075 10:30:40 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:16.075 10:30:40 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:16.075 10:30:40 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:16.075 10:30:40 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:16.075 10:30:40 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:16.075 10:30:40 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:16.075 10:30:40 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:16.075 10:30:40 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:16.075 10:30:40 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:16.075 10:30:40 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:16.075 10:30:40 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:16.075 10:30:40 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:16.075 10:30:40 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:16.075 10:30:40 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:16.075 10:30:40 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.075 10:30:40 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.075 10:30:40 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.075 10:30:40 json_config -- paths/export.sh@5 -- # export PATH 00:05:16.075 10:30:40 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.075 10:30:40 json_config -- nvmf/common.sh@47 -- # : 0 00:05:16.075 10:30:40 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:16.075 10:30:40 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:16.075 10:30:40 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:16.075 10:30:40 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:16.075 10:30:40 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:16.075 10:30:40 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:16.075 10:30:40 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:16.075 10:30:40 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:16.075 10:30:40 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:16.075 10:30:40 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:16.075 10:30:40 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:16.075 10:30:40 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:16.075 10:30:40 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:16.075 10:30:40 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:16.075 10:30:40 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:16.075 10:30:40 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:16.075 10:30:40 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:16.075 10:30:40 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:16.075 10:30:40 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:16.075 10:30:40 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:16.075 10:30:40 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:16.075 10:30:40 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:16.075 10:30:40 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:16.075 10:30:40 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:16.075 INFO: JSON configuration test init 00:05:16.075 10:30:40 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:16.075 10:30:40 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:16.075 10:30:40 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:16.075 10:30:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.075 10:30:40 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:16.075 10:30:40 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:16.075 10:30:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.075 10:30:40 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:16.075 10:30:40 json_config -- json_config/common.sh@9 -- # local app=target 00:05:16.075 10:30:40 json_config -- json_config/common.sh@10 -- # shift 00:05:16.075 10:30:40 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:16.075 10:30:40 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:16.075 10:30:40 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:16.075 10:30:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:16.075 10:30:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:16.075 10:30:40 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=616840 00:05:16.075 10:30:40 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:16.075 Waiting for target to run... 
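The "Waiting for target to run..." message covers a polling step: the target was launched in the background with -r /var/tmp/spdk_tgt.sock --wait-for-rpc, and no rpc.py command is sent until that socket is usable. A simplified stand-in for the wait, assuming the socket path passed on the command line above (the real waitforlisten helper is more careful and also watches the process itself):

  SOCK=/var/tmp/spdk_tgt.sock
  for _ in $(seq 1 100); do
      # crude readiness check: the socket appears once the target can serve RPCs
      [ -S "$SOCK" ] && break
      sleep 0.1
  done
  [ -S "$SOCK" ] || { echo "spdk_tgt did not come up on $SOCK" >&2; exit 1; }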
00:05:16.075 10:30:40 json_config -- json_config/common.sh@25 -- # waitforlisten 616840 /var/tmp/spdk_tgt.sock 00:05:16.075 10:30:40 json_config -- common/autotest_common.sh@830 -- # '[' -z 616840 ']' 00:05:16.075 10:30:40 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:16.075 10:30:40 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:16.075 10:30:40 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:16.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:16.075 10:30:40 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:16.075 10:30:40 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:16.075 10:30:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.076 [2024-06-10 10:30:40.247326] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:05:16.076 [2024-06-10 10:30:40.247391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid616840 ] 00:05:16.076 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.394 [2024-06-10 10:30:40.504006] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.394 [2024-06-10 10:30:40.555147] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.965 10:30:40 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:16.965 10:30:40 json_config -- common/autotest_common.sh@863 -- # return 0 00:05:16.965 10:30:40 json_config -- json_config/common.sh@26 -- # echo '' 00:05:16.965 00:05:16.965 10:30:40 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:16.965 10:30:40 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:16.965 10:30:40 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:16.965 10:30:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.965 10:30:41 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:16.965 10:30:41 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:16.965 10:30:41 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:16.965 10:30:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.965 10:30:41 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:16.965 10:30:41 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:16.965 10:30:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:17.536 10:30:41 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:17.536 10:30:41 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:17.536 10:30:41 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:17.536 10:30:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.536 10:30:41 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:05:17.536 10:30:41 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:17.536 10:30:41 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:17.536 10:30:41 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:17.536 10:30:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:17.536 10:30:41 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:17.536 10:30:41 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:17.536 10:30:41 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:17.536 10:30:41 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:17.536 10:30:41 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:17.536 10:30:41 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:17.536 10:30:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.536 10:30:41 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:17.536 10:30:41 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:17.536 10:30:41 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:17.536 10:30:41 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:17.536 10:30:41 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:17.536 10:30:41 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:17.536 10:30:41 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:17.536 10:30:41 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:17.536 10:30:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.536 10:30:41 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:17.536 10:30:41 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:17.536 10:30:41 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:17.536 10:30:41 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:17.536 10:30:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:17.797 MallocForNvmf0 00:05:17.797 10:30:41 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:17.797 10:30:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:18.057 MallocForNvmf1 00:05:18.057 10:30:42 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:18.057 10:30:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:18.057 [2024-06-10 10:30:42.278684] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:18.057 10:30:42 
json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:18.057 10:30:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:18.317 10:30:42 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:18.317 10:30:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:18.578 10:30:42 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:18.578 10:30:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:18.578 10:30:42 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:18.578 10:30:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:18.838 [2024-06-10 10:30:42.948458] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:05:18.838 [2024-06-10 10:30:42.948890] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:18.838 10:30:42 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:18.838 10:30:42 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:18.838 10:30:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.838 10:30:43 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:18.838 10:30:43 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:18.838 10:30:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.838 10:30:43 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:18.838 10:30:43 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:18.838 10:30:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:19.099 MallocBdevForConfigChangeCheck 00:05:19.099 10:30:43 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:19.099 10:30:43 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:19.099 10:30:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.099 10:30:43 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:19.099 10:30:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:19.360 10:30:43 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down 
applications...' 00:05:19.360 INFO: shutting down applications... 00:05:19.360 10:30:43 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:19.360 10:30:43 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:19.360 10:30:43 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:19.360 10:30:43 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:19.932 Calling clear_iscsi_subsystem 00:05:19.932 Calling clear_nvmf_subsystem 00:05:19.932 Calling clear_nbd_subsystem 00:05:19.932 Calling clear_ublk_subsystem 00:05:19.932 Calling clear_vhost_blk_subsystem 00:05:19.932 Calling clear_vhost_scsi_subsystem 00:05:19.932 Calling clear_bdev_subsystem 00:05:19.932 10:30:43 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:19.932 10:30:43 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:19.932 10:30:43 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:19.932 10:30:43 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:19.932 10:30:43 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:19.932 10:30:43 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:20.192 10:30:44 json_config -- json_config/json_config.sh@345 -- # break 00:05:20.192 10:30:44 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:20.192 10:30:44 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:20.192 10:30:44 json_config -- json_config/common.sh@31 -- # local app=target 00:05:20.192 10:30:44 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:20.192 10:30:44 json_config -- json_config/common.sh@35 -- # [[ -n 616840 ]] 00:05:20.192 10:30:44 json_config -- json_config/common.sh@38 -- # kill -SIGINT 616840 00:05:20.192 [2024-06-10 10:30:44.278108] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:05:20.192 10:30:44 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:20.192 10:30:44 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.192 10:30:44 json_config -- json_config/common.sh@41 -- # kill -0 616840 00:05:20.192 10:30:44 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:20.764 10:30:44 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:20.764 10:30:44 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.764 10:30:44 json_config -- json_config/common.sh@41 -- # kill -0 616840 00:05:20.764 10:30:44 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:20.764 10:30:44 json_config -- json_config/common.sh@43 -- # break 00:05:20.764 10:30:44 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:20.764 10:30:44 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:20.764 SPDK target shutdown done 00:05:20.764 10:30:44 json_config -- 
json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:20.764 INFO: relaunching applications... 00:05:20.764 10:30:44 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:20.764 10:30:44 json_config -- json_config/common.sh@9 -- # local app=target 00:05:20.764 10:30:44 json_config -- json_config/common.sh@10 -- # shift 00:05:20.764 10:30:44 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:20.764 10:30:44 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:20.764 10:30:44 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:20.764 10:30:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:20.764 10:30:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:20.764 10:30:44 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=617966 00:05:20.764 10:30:44 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:20.764 Waiting for target to run... 00:05:20.764 10:30:44 json_config -- json_config/common.sh@25 -- # waitforlisten 617966 /var/tmp/spdk_tgt.sock 00:05:20.764 10:30:44 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:20.764 10:30:44 json_config -- common/autotest_common.sh@830 -- # '[' -z 617966 ']' 00:05:20.764 10:30:44 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:20.764 10:30:44 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:20.764 10:30:44 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:20.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:20.764 10:30:44 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:20.764 10:30:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.764 [2024-06-10 10:30:44.841010] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
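The spdk_tgt_config.json handed to the relaunched target above is the configuration that was built over RPC earlier in this test and then captured with save_config. Condensed to just the RPC calls that appear in this log, with the socket and paths used in this run:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk_tgt.sock
  "$RPC" -s "$SOCK" bdev_malloc_create 8 512 --name MallocForNvmf0
  "$RPC" -s "$SOCK" bdev_malloc_create 4 1024 --name MallocForNvmf1
  "$RPC" -s "$SOCK" nvmf_create_transport -t tcp -u 8192 -c 0
  "$RPC" -s "$SOCK" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$RPC" -s "$SOCK" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  "$RPC" -s "$SOCK" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  "$RPC" -s "$SOCK" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  "$RPC" -s "$SOCK" bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
  "$RPC" -s "$SOCK" save_config > /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json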
00:05:20.764 [2024-06-10 10:30:44.841064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid617966 ] 00:05:20.764 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.024 [2024-06-10 10:30:45.111567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.024 [2024-06-10 10:30:45.163863] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.596 [2024-06-10 10:30:45.656371] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:21.596 [2024-06-10 10:30:45.688343] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:05:21.596 [2024-06-10 10:30:45.688763] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:21.596 10:30:45 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:21.596 10:30:45 json_config -- common/autotest_common.sh@863 -- # return 0 00:05:21.596 10:30:45 json_config -- json_config/common.sh@26 -- # echo '' 00:05:21.596 00:05:21.596 10:30:45 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:21.597 10:30:45 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:21.597 INFO: Checking if target configuration is the same... 00:05:21.597 10:30:45 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:21.597 10:30:45 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:21.597 10:30:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:21.597 + '[' 2 -ne 2 ']' 00:05:21.597 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:21.597 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:21.597 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:21.597 +++ basename /dev/fd/62 00:05:21.597 ++ mktemp /tmp/62.XXX 00:05:21.597 + tmp_file_1=/tmp/62.tin 00:05:21.597 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:21.597 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:21.597 + tmp_file_2=/tmp/spdk_tgt_config.json.9D4 00:05:21.597 + ret=0 00:05:21.597 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:21.857 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:21.857 + diff -u /tmp/62.tin /tmp/spdk_tgt_config.json.9D4 00:05:21.857 + echo 'INFO: JSON config files are the same' 00:05:21.857 INFO: JSON config files are the same 00:05:21.857 + rm /tmp/62.tin /tmp/spdk_tgt_config.json.9D4 00:05:21.857 + exit 0 00:05:21.857 10:30:46 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:21.857 10:30:46 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:21.857 INFO: changing configuration and checking if this can be detected... 
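The "JSON config files are the same" verdict above comes from normalizing both sides before diffing: the live configuration (rpc.py save_config) and the on-disk spdk_tgt_config.json are each run through config_filter.py -method sort, so ordering differences do not count as changes, and only then compared with diff -u. Roughly, assuming config_filter.py reads the configuration on stdin the way json_diff.sh drives it here (/tmp/live.json and /tmp/ondisk.json stand in for the mktemp files the test actually uses):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  FILTER=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
  CFG=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
  "$RPC" -s /var/tmp/spdk_tgt.sock save_config | "$FILTER" -method sort > /tmp/live.json
  "$FILTER" -method sort < "$CFG" > /tmp/ondisk.json
  diff -u /tmp/live.json /tmp/ondisk.json && echo 'INFO: JSON config files are the same'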
00:05:21.857 10:30:46 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:21.857 10:30:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:22.118 10:30:46 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:22.118 10:30:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:22.118 10:30:46 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:22.118 + '[' 2 -ne 2 ']' 00:05:22.118 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:22.118 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:22.118 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:22.118 +++ basename /dev/fd/62 00:05:22.118 ++ mktemp /tmp/62.XXX 00:05:22.118 + tmp_file_1=/tmp/62.2z7 00:05:22.118 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:22.118 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:22.118 + tmp_file_2=/tmp/spdk_tgt_config.json.epe 00:05:22.118 + ret=0 00:05:22.118 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:22.378 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:22.378 + diff -u /tmp/62.2z7 /tmp/spdk_tgt_config.json.epe 00:05:22.378 + ret=1 00:05:22.378 + echo '=== Start of file: /tmp/62.2z7 ===' 00:05:22.378 + cat /tmp/62.2z7 00:05:22.378 + echo '=== End of file: /tmp/62.2z7 ===' 00:05:22.378 + echo '' 00:05:22.378 + echo '=== Start of file: /tmp/spdk_tgt_config.json.epe ===' 00:05:22.378 + cat /tmp/spdk_tgt_config.json.epe 00:05:22.378 + echo '=== End of file: /tmp/spdk_tgt_config.json.epe ===' 00:05:22.378 + echo '' 00:05:22.378 + rm /tmp/62.2z7 /tmp/spdk_tgt_config.json.epe 00:05:22.378 + exit 1 00:05:22.378 10:30:46 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:22.378 INFO: configuration change detected. 
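MallocBdevForConfigChangeCheck exists purely as a canary: deleting it guarantees the live configuration no longer matches the file the target was started from, which is what produces the ret=1 and the "configuration change detected" message above. Continuing the comparison sketch, with the same $RPC, $FILTER and temp-file stand-ins:

  "$RPC" -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  "$RPC" -s /var/tmp/spdk_tgt.sock save_config | "$FILTER" -method sort > /tmp/live.json
  if diff -u /tmp/live.json /tmp/ondisk.json; then
      echo 'ERROR: configuration change was not detected' >&2
      exit 1
  fi
  echo 'INFO: configuration change detected.'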
00:05:22.378 10:30:46 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:22.378 10:30:46 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:22.378 10:30:46 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:22.378 10:30:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.378 10:30:46 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:22.378 10:30:46 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:22.378 10:30:46 json_config -- json_config/json_config.sh@317 -- # [[ -n 617966 ]] 00:05:22.378 10:30:46 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:22.378 10:30:46 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:22.378 10:30:46 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:22.378 10:30:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.378 10:30:46 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:22.378 10:30:46 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:22.378 10:30:46 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:22.378 10:30:46 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:22.378 10:30:46 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:22.378 10:30:46 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:22.378 10:30:46 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:22.378 10:30:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.378 10:30:46 json_config -- json_config/json_config.sh@323 -- # killprocess 617966 00:05:22.378 10:30:46 json_config -- common/autotest_common.sh@949 -- # '[' -z 617966 ']' 00:05:22.378 10:30:46 json_config -- common/autotest_common.sh@953 -- # kill -0 617966 00:05:22.378 10:30:46 json_config -- common/autotest_common.sh@954 -- # uname 00:05:22.638 10:30:46 json_config -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:22.638 10:30:46 json_config -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 617966 00:05:22.638 10:30:46 json_config -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:22.638 10:30:46 json_config -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:22.639 10:30:46 json_config -- common/autotest_common.sh@967 -- # echo 'killing process with pid 617966' 00:05:22.639 killing process with pid 617966 00:05:22.639 10:30:46 json_config -- common/autotest_common.sh@968 -- # kill 617966 00:05:22.639 [2024-06-10 10:30:46.716022] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:05:22.639 10:30:46 json_config -- common/autotest_common.sh@973 -- # wait 617966 00:05:22.900 10:30:47 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:22.900 10:30:47 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:22.900 10:30:47 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:22.900 10:30:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.900 10:30:47 
json_config -- json_config/json_config.sh@328 -- # return 0 00:05:22.900 10:30:47 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:22.900 INFO: Success 00:05:22.900 00:05:22.900 real 0m6.968s 00:05:22.900 user 0m8.515s 00:05:22.900 sys 0m1.647s 00:05:22.900 10:30:47 json_config -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:22.900 10:30:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.900 ************************************ 00:05:22.900 END TEST json_config 00:05:22.900 ************************************ 00:05:22.900 10:30:47 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:22.900 10:30:47 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:22.900 10:30:47 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:22.900 10:30:47 -- common/autotest_common.sh@10 -- # set +x 00:05:22.900 ************************************ 00:05:22.900 START TEST json_config_extra_key 00:05:22.900 ************************************ 00:05:22.900 10:30:47 json_config_extra_key -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:23.163 10:30:47 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:23.163 10:30:47 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:23.163 10:30:47 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:23.163 10:30:47 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:23.163 10:30:47 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:23.163 10:30:47 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:23.163 10:30:47 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:23.163 10:30:47 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:23.163 10:30:47 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:23.163 10:30:47 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:23.163 10:30:47 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:23.163 10:30:47 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:23.163 10:30:47 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:23.163 10:30:47 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:23.163 10:30:47 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:23.163 10:30:47 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:23.163 10:30:47 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:23.163 10:30:47 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:23.163 10:30:47 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:23.163 10:30:47 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:23.163 10:30:47 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:23.163 10:30:47 
json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:23.163 10:30:47 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.163 10:30:47 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.163 10:30:47 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.163 10:30:47 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:23.163 10:30:47 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.163 10:30:47 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:23.163 10:30:47 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:23.163 10:30:47 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:23.163 10:30:47 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:23.163 10:30:47 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:23.163 10:30:47 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:23.163 10:30:47 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:23.163 10:30:47 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:23.163 10:30:47 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:23.163 10:30:47 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:23.163 10:30:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:23.163 10:30:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:23.163 10:30:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:23.163 10:30:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:23.163 10:30:47 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:23.163 10:30:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:23.163 10:30:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:23.163 10:30:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:23.163 10:30:47 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:23.163 10:30:47 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:23.163 INFO: launching applications... 00:05:23.163 10:30:47 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:23.163 10:30:47 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:23.163 10:30:47 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:23.163 10:30:47 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:23.163 10:30:47 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:23.163 10:30:47 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:23.163 10:30:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:23.163 10:30:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:23.163 10:30:47 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=618592 00:05:23.163 10:30:47 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:23.163 Waiting for target to run... 00:05:23.163 10:30:47 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 618592 /var/tmp/spdk_tgt.sock 00:05:23.163 10:30:47 json_config_extra_key -- common/autotest_common.sh@830 -- # '[' -z 618592 ']' 00:05:23.163 10:30:47 json_config_extra_key -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:23.163 10:30:47 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:23.163 10:30:47 json_config_extra_key -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:23.163 10:30:47 json_config_extra_key -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:23.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:23.163 10:30:47 json_config_extra_key -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:23.163 10:30:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:23.163 [2024-06-10 10:30:47.282978] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
00:05:23.163 [2024-06-10 10:30:47.283048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid618592 ] 00:05:23.163 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.424 [2024-06-10 10:30:47.648115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.424 [2024-06-10 10:30:47.700909] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.994 10:30:48 json_config_extra_key -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:23.994 10:30:48 json_config_extra_key -- common/autotest_common.sh@863 -- # return 0 00:05:23.994 10:30:48 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:23.994 00:05:23.994 10:30:48 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:23.994 INFO: shutting down applications... 00:05:23.994 10:30:48 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:23.994 10:30:48 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:23.994 10:30:48 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:23.994 10:30:48 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 618592 ]] 00:05:23.994 10:30:48 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 618592 00:05:23.994 10:30:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:23.994 10:30:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:23.994 10:30:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 618592 00:05:23.994 10:30:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:24.566 10:30:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:24.566 10:30:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:24.566 10:30:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 618592 00:05:24.566 10:30:48 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:24.566 10:30:48 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:24.566 10:30:48 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:24.566 10:30:48 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:24.566 SPDK target shutdown done 00:05:24.566 10:30:48 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:24.566 Success 00:05:24.566 00:05:24.566 real 0m1.448s 00:05:24.567 user 0m1.013s 00:05:24.567 sys 0m0.462s 00:05:24.567 10:30:48 json_config_extra_key -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:24.567 10:30:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:24.567 ************************************ 00:05:24.567 END TEST json_config_extra_key 00:05:24.567 ************************************ 00:05:24.567 10:30:48 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:24.567 10:30:48 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:24.567 10:30:48 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:24.567 10:30:48 -- common/autotest_common.sh@10 -- # set +x 00:05:24.567 ************************************ 
00:05:24.567 START TEST alias_rpc 00:05:24.567 ************************************ 00:05:24.567 10:30:48 alias_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:24.567 * Looking for test storage... 00:05:24.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:24.567 10:30:48 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:24.567 10:30:48 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=618863 00:05:24.567 10:30:48 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 618863 00:05:24.567 10:30:48 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.567 10:30:48 alias_rpc -- common/autotest_common.sh@830 -- # '[' -z 618863 ']' 00:05:24.567 10:30:48 alias_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.567 10:30:48 alias_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:24.567 10:30:48 alias_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.567 10:30:48 alias_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:24.567 10:30:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.567 [2024-06-10 10:30:48.805980] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:05:24.567 [2024-06-10 10:30:48.806048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid618863 ] 00:05:24.567 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.828 [2024-06-10 10:30:48.873204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.828 [2024-06-10 10:30:48.949855] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.398 10:30:49 alias_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:25.399 10:30:49 alias_rpc -- common/autotest_common.sh@863 -- # return 0 00:05:25.399 10:30:49 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:25.658 10:30:49 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 618863 00:05:25.658 10:30:49 alias_rpc -- common/autotest_common.sh@949 -- # '[' -z 618863 ']' 00:05:25.658 10:30:49 alias_rpc -- common/autotest_common.sh@953 -- # kill -0 618863 00:05:25.658 10:30:49 alias_rpc -- common/autotest_common.sh@954 -- # uname 00:05:25.658 10:30:49 alias_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:25.658 10:30:49 alias_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 618863 00:05:25.658 10:30:49 alias_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:25.658 10:30:49 alias_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:25.658 10:30:49 alias_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 618863' 00:05:25.658 killing process with pid 618863 00:05:25.658 10:30:49 alias_rpc -- common/autotest_common.sh@968 -- # kill 618863 00:05:25.658 10:30:49 alias_rpc -- common/autotest_common.sh@973 -- # wait 618863 00:05:25.919 
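Note: alias_rpc above tears its target down with killprocess rather than the SIGINT loop; the trace shows it verifying the PID is still alive, logging the process name via ps (reactor_0 in this run), then killing and waiting. A rough, hedged reconstruction of that pattern; the real helper in autotest_common.sh has additional sudo handling not shown here.

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" || return 1                      # still running?
    if [[ $(uname) = Linux ]]; then
        ps --no-headers -o comm= "$pid"             # e.g. reactor_0 in this log
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}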
00:05:25.919 real 0m1.392s 00:05:25.919 user 0m1.528s 00:05:25.919 sys 0m0.376s 00:05:25.919 10:30:50 alias_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:25.919 10:30:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.919 ************************************ 00:05:25.919 END TEST alias_rpc 00:05:25.919 ************************************ 00:05:25.919 10:30:50 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:25.919 10:30:50 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:25.919 10:30:50 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:25.919 10:30:50 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:25.919 10:30:50 -- common/autotest_common.sh@10 -- # set +x 00:05:25.919 ************************************ 00:05:25.919 START TEST spdkcli_tcp 00:05:25.919 ************************************ 00:05:25.919 10:30:50 spdkcli_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:25.919 * Looking for test storage... 00:05:25.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:25.919 10:30:50 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:25.919 10:30:50 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:25.919 10:30:50 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:26.181 10:30:50 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:26.181 10:30:50 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:26.181 10:30:50 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:26.181 10:30:50 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:26.181 10:30:50 spdkcli_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:26.181 10:30:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:26.181 10:30:50 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=619196 00:05:26.181 10:30:50 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 619196 00:05:26.181 10:30:50 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:26.181 10:30:50 spdkcli_tcp -- common/autotest_common.sh@830 -- # '[' -z 619196 ']' 00:05:26.181 10:30:50 spdkcli_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.181 10:30:50 spdkcli_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:26.181 10:30:50 spdkcli_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.181 10:30:50 spdkcli_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:26.181 10:30:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:26.181 [2024-06-10 10:30:50.268945] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
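Note: the spdkcli_tcp run starting here exercises the JSON-RPC server over TCP. Further down in the trace, socat forwards 127.0.0.1:9998 to the target's UNIX socket and rpc.py is pointed at that address. The two commands below are lifted from that trace and can be run stand-alone against an spdk_tgt that is already listening on /var/tmp/spdk.sock; the -r/-t values are simply the ones the test passes.

# Bridge the UNIX RPC socket to TCP port 9998 (IP and port come from tcp.sh above).
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!

# Query the method list over TCP instead of the UNIX socket.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

kill "$socat_pid"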
00:05:26.181 [2024-06-10 10:30:50.268996] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid619196 ] 00:05:26.181 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.181 [2024-06-10 10:30:50.339324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.181 [2024-06-10 10:30:50.408263] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.181 [2024-06-10 10:30:50.408428] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.753 10:30:51 spdkcli_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:26.753 10:30:51 spdkcli_tcp -- common/autotest_common.sh@863 -- # return 0 00:05:26.753 10:30:51 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=619526 00:05:26.753 10:30:51 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:26.753 10:30:51 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:27.014 [ 00:05:27.014 "bdev_malloc_delete", 00:05:27.014 "bdev_malloc_create", 00:05:27.014 "bdev_null_resize", 00:05:27.014 "bdev_null_delete", 00:05:27.014 "bdev_null_create", 00:05:27.014 "bdev_nvme_cuse_unregister", 00:05:27.014 "bdev_nvme_cuse_register", 00:05:27.014 "bdev_opal_new_user", 00:05:27.014 "bdev_opal_set_lock_state", 00:05:27.014 "bdev_opal_delete", 00:05:27.014 "bdev_opal_get_info", 00:05:27.014 "bdev_opal_create", 00:05:27.014 "bdev_nvme_opal_revert", 00:05:27.014 "bdev_nvme_opal_init", 00:05:27.014 "bdev_nvme_send_cmd", 00:05:27.014 "bdev_nvme_get_path_iostat", 00:05:27.014 "bdev_nvme_get_mdns_discovery_info", 00:05:27.014 "bdev_nvme_stop_mdns_discovery", 00:05:27.014 "bdev_nvme_start_mdns_discovery", 00:05:27.014 "bdev_nvme_set_multipath_policy", 00:05:27.014 "bdev_nvme_set_preferred_path", 00:05:27.014 "bdev_nvme_get_io_paths", 00:05:27.014 "bdev_nvme_remove_error_injection", 00:05:27.014 "bdev_nvme_add_error_injection", 00:05:27.014 "bdev_nvme_get_discovery_info", 00:05:27.014 "bdev_nvme_stop_discovery", 00:05:27.014 "bdev_nvme_start_discovery", 00:05:27.014 "bdev_nvme_get_controller_health_info", 00:05:27.014 "bdev_nvme_disable_controller", 00:05:27.014 "bdev_nvme_enable_controller", 00:05:27.014 "bdev_nvme_reset_controller", 00:05:27.014 "bdev_nvme_get_transport_statistics", 00:05:27.014 "bdev_nvme_apply_firmware", 00:05:27.014 "bdev_nvme_detach_controller", 00:05:27.014 "bdev_nvme_get_controllers", 00:05:27.014 "bdev_nvme_attach_controller", 00:05:27.014 "bdev_nvme_set_hotplug", 00:05:27.014 "bdev_nvme_set_options", 00:05:27.014 "bdev_passthru_delete", 00:05:27.014 "bdev_passthru_create", 00:05:27.014 "bdev_lvol_set_parent_bdev", 00:05:27.014 "bdev_lvol_set_parent", 00:05:27.014 "bdev_lvol_check_shallow_copy", 00:05:27.014 "bdev_lvol_start_shallow_copy", 00:05:27.014 "bdev_lvol_grow_lvstore", 00:05:27.014 "bdev_lvol_get_lvols", 00:05:27.014 "bdev_lvol_get_lvstores", 00:05:27.014 "bdev_lvol_delete", 00:05:27.014 "bdev_lvol_set_read_only", 00:05:27.014 "bdev_lvol_resize", 00:05:27.014 "bdev_lvol_decouple_parent", 00:05:27.014 "bdev_lvol_inflate", 00:05:27.014 "bdev_lvol_rename", 00:05:27.014 "bdev_lvol_clone_bdev", 00:05:27.014 "bdev_lvol_clone", 00:05:27.014 "bdev_lvol_snapshot", 00:05:27.014 "bdev_lvol_create", 00:05:27.014 "bdev_lvol_delete_lvstore", 00:05:27.014 "bdev_lvol_rename_lvstore", 
00:05:27.014 "bdev_lvol_create_lvstore", 00:05:27.014 "bdev_raid_set_options", 00:05:27.014 "bdev_raid_remove_base_bdev", 00:05:27.014 "bdev_raid_add_base_bdev", 00:05:27.014 "bdev_raid_delete", 00:05:27.014 "bdev_raid_create", 00:05:27.014 "bdev_raid_get_bdevs", 00:05:27.014 "bdev_error_inject_error", 00:05:27.014 "bdev_error_delete", 00:05:27.014 "bdev_error_create", 00:05:27.014 "bdev_split_delete", 00:05:27.014 "bdev_split_create", 00:05:27.014 "bdev_delay_delete", 00:05:27.014 "bdev_delay_create", 00:05:27.014 "bdev_delay_update_latency", 00:05:27.014 "bdev_zone_block_delete", 00:05:27.014 "bdev_zone_block_create", 00:05:27.014 "blobfs_create", 00:05:27.014 "blobfs_detect", 00:05:27.014 "blobfs_set_cache_size", 00:05:27.014 "bdev_aio_delete", 00:05:27.014 "bdev_aio_rescan", 00:05:27.014 "bdev_aio_create", 00:05:27.014 "bdev_ftl_set_property", 00:05:27.014 "bdev_ftl_get_properties", 00:05:27.014 "bdev_ftl_get_stats", 00:05:27.014 "bdev_ftl_unmap", 00:05:27.014 "bdev_ftl_unload", 00:05:27.014 "bdev_ftl_delete", 00:05:27.014 "bdev_ftl_load", 00:05:27.014 "bdev_ftl_create", 00:05:27.014 "bdev_virtio_attach_controller", 00:05:27.014 "bdev_virtio_scsi_get_devices", 00:05:27.014 "bdev_virtio_detach_controller", 00:05:27.014 "bdev_virtio_blk_set_hotplug", 00:05:27.014 "bdev_iscsi_delete", 00:05:27.015 "bdev_iscsi_create", 00:05:27.015 "bdev_iscsi_set_options", 00:05:27.015 "accel_error_inject_error", 00:05:27.015 "ioat_scan_accel_module", 00:05:27.015 "dsa_scan_accel_module", 00:05:27.015 "iaa_scan_accel_module", 00:05:27.015 "vfu_virtio_create_scsi_endpoint", 00:05:27.015 "vfu_virtio_scsi_remove_target", 00:05:27.015 "vfu_virtio_scsi_add_target", 00:05:27.015 "vfu_virtio_create_blk_endpoint", 00:05:27.015 "vfu_virtio_delete_endpoint", 00:05:27.015 "keyring_file_remove_key", 00:05:27.015 "keyring_file_add_key", 00:05:27.015 "keyring_linux_set_options", 00:05:27.015 "iscsi_get_histogram", 00:05:27.015 "iscsi_enable_histogram", 00:05:27.015 "iscsi_set_options", 00:05:27.015 "iscsi_get_auth_groups", 00:05:27.015 "iscsi_auth_group_remove_secret", 00:05:27.015 "iscsi_auth_group_add_secret", 00:05:27.015 "iscsi_delete_auth_group", 00:05:27.015 "iscsi_create_auth_group", 00:05:27.015 "iscsi_set_discovery_auth", 00:05:27.015 "iscsi_get_options", 00:05:27.015 "iscsi_target_node_request_logout", 00:05:27.015 "iscsi_target_node_set_redirect", 00:05:27.015 "iscsi_target_node_set_auth", 00:05:27.015 "iscsi_target_node_add_lun", 00:05:27.015 "iscsi_get_stats", 00:05:27.015 "iscsi_get_connections", 00:05:27.015 "iscsi_portal_group_set_auth", 00:05:27.015 "iscsi_start_portal_group", 00:05:27.015 "iscsi_delete_portal_group", 00:05:27.015 "iscsi_create_portal_group", 00:05:27.015 "iscsi_get_portal_groups", 00:05:27.015 "iscsi_delete_target_node", 00:05:27.015 "iscsi_target_node_remove_pg_ig_maps", 00:05:27.015 "iscsi_target_node_add_pg_ig_maps", 00:05:27.015 "iscsi_create_target_node", 00:05:27.015 "iscsi_get_target_nodes", 00:05:27.015 "iscsi_delete_initiator_group", 00:05:27.015 "iscsi_initiator_group_remove_initiators", 00:05:27.015 "iscsi_initiator_group_add_initiators", 00:05:27.015 "iscsi_create_initiator_group", 00:05:27.015 "iscsi_get_initiator_groups", 00:05:27.015 "nvmf_set_crdt", 00:05:27.015 "nvmf_set_config", 00:05:27.015 "nvmf_set_max_subsystems", 00:05:27.015 "nvmf_stop_mdns_prr", 00:05:27.015 "nvmf_publish_mdns_prr", 00:05:27.015 "nvmf_subsystem_get_listeners", 00:05:27.015 "nvmf_subsystem_get_qpairs", 00:05:27.015 "nvmf_subsystem_get_controllers", 00:05:27.015 "nvmf_get_stats", 00:05:27.015 
"nvmf_get_transports", 00:05:27.015 "nvmf_create_transport", 00:05:27.015 "nvmf_get_targets", 00:05:27.015 "nvmf_delete_target", 00:05:27.015 "nvmf_create_target", 00:05:27.015 "nvmf_subsystem_allow_any_host", 00:05:27.015 "nvmf_subsystem_remove_host", 00:05:27.015 "nvmf_subsystem_add_host", 00:05:27.015 "nvmf_ns_remove_host", 00:05:27.015 "nvmf_ns_add_host", 00:05:27.015 "nvmf_subsystem_remove_ns", 00:05:27.015 "nvmf_subsystem_add_ns", 00:05:27.015 "nvmf_subsystem_listener_set_ana_state", 00:05:27.015 "nvmf_discovery_get_referrals", 00:05:27.015 "nvmf_discovery_remove_referral", 00:05:27.015 "nvmf_discovery_add_referral", 00:05:27.015 "nvmf_subsystem_remove_listener", 00:05:27.015 "nvmf_subsystem_add_listener", 00:05:27.015 "nvmf_delete_subsystem", 00:05:27.015 "nvmf_create_subsystem", 00:05:27.015 "nvmf_get_subsystems", 00:05:27.015 "env_dpdk_get_mem_stats", 00:05:27.015 "nbd_get_disks", 00:05:27.015 "nbd_stop_disk", 00:05:27.015 "nbd_start_disk", 00:05:27.015 "ublk_recover_disk", 00:05:27.015 "ublk_get_disks", 00:05:27.015 "ublk_stop_disk", 00:05:27.015 "ublk_start_disk", 00:05:27.015 "ublk_destroy_target", 00:05:27.015 "ublk_create_target", 00:05:27.015 "virtio_blk_create_transport", 00:05:27.015 "virtio_blk_get_transports", 00:05:27.015 "vhost_controller_set_coalescing", 00:05:27.015 "vhost_get_controllers", 00:05:27.015 "vhost_delete_controller", 00:05:27.015 "vhost_create_blk_controller", 00:05:27.015 "vhost_scsi_controller_remove_target", 00:05:27.015 "vhost_scsi_controller_add_target", 00:05:27.015 "vhost_start_scsi_controller", 00:05:27.015 "vhost_create_scsi_controller", 00:05:27.015 "thread_set_cpumask", 00:05:27.015 "framework_get_scheduler", 00:05:27.015 "framework_set_scheduler", 00:05:27.015 "framework_get_reactors", 00:05:27.015 "thread_get_io_channels", 00:05:27.015 "thread_get_pollers", 00:05:27.015 "thread_get_stats", 00:05:27.015 "framework_monitor_context_switch", 00:05:27.015 "spdk_kill_instance", 00:05:27.015 "log_enable_timestamps", 00:05:27.015 "log_get_flags", 00:05:27.015 "log_clear_flag", 00:05:27.015 "log_set_flag", 00:05:27.015 "log_get_level", 00:05:27.015 "log_set_level", 00:05:27.015 "log_get_print_level", 00:05:27.015 "log_set_print_level", 00:05:27.015 "framework_enable_cpumask_locks", 00:05:27.015 "framework_disable_cpumask_locks", 00:05:27.015 "framework_wait_init", 00:05:27.015 "framework_start_init", 00:05:27.015 "scsi_get_devices", 00:05:27.015 "bdev_get_histogram", 00:05:27.015 "bdev_enable_histogram", 00:05:27.015 "bdev_set_qos_limit", 00:05:27.015 "bdev_set_qd_sampling_period", 00:05:27.015 "bdev_get_bdevs", 00:05:27.015 "bdev_reset_iostat", 00:05:27.015 "bdev_get_iostat", 00:05:27.015 "bdev_examine", 00:05:27.015 "bdev_wait_for_examine", 00:05:27.015 "bdev_set_options", 00:05:27.015 "notify_get_notifications", 00:05:27.015 "notify_get_types", 00:05:27.015 "accel_get_stats", 00:05:27.015 "accel_set_options", 00:05:27.015 "accel_set_driver", 00:05:27.015 "accel_crypto_key_destroy", 00:05:27.015 "accel_crypto_keys_get", 00:05:27.015 "accel_crypto_key_create", 00:05:27.015 "accel_assign_opc", 00:05:27.015 "accel_get_module_info", 00:05:27.015 "accel_get_opc_assignments", 00:05:27.015 "vmd_rescan", 00:05:27.015 "vmd_remove_device", 00:05:27.015 "vmd_enable", 00:05:27.015 "sock_get_default_impl", 00:05:27.015 "sock_set_default_impl", 00:05:27.015 "sock_impl_set_options", 00:05:27.015 "sock_impl_get_options", 00:05:27.015 "iobuf_get_stats", 00:05:27.015 "iobuf_set_options", 00:05:27.015 "keyring_get_keys", 00:05:27.015 "framework_get_pci_devices", 
00:05:27.015 "framework_get_config", 00:05:27.015 "framework_get_subsystems", 00:05:27.015 "vfu_tgt_set_base_path", 00:05:27.015 "trace_get_info", 00:05:27.015 "trace_get_tpoint_group_mask", 00:05:27.015 "trace_disable_tpoint_group", 00:05:27.015 "trace_enable_tpoint_group", 00:05:27.015 "trace_clear_tpoint_mask", 00:05:27.015 "trace_set_tpoint_mask", 00:05:27.015 "spdk_get_version", 00:05:27.015 "rpc_get_methods" 00:05:27.015 ] 00:05:27.015 10:30:51 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:27.015 10:30:51 spdkcli_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:27.015 10:30:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:27.015 10:30:51 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:27.015 10:30:51 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 619196 00:05:27.015 10:30:51 spdkcli_tcp -- common/autotest_common.sh@949 -- # '[' -z 619196 ']' 00:05:27.015 10:30:51 spdkcli_tcp -- common/autotest_common.sh@953 -- # kill -0 619196 00:05:27.015 10:30:51 spdkcli_tcp -- common/autotest_common.sh@954 -- # uname 00:05:27.015 10:30:51 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:27.015 10:30:51 spdkcli_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 619196 00:05:27.015 10:30:51 spdkcli_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:27.015 10:30:51 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:27.015 10:30:51 spdkcli_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 619196' 00:05:27.015 killing process with pid 619196 00:05:27.015 10:30:51 spdkcli_tcp -- common/autotest_common.sh@968 -- # kill 619196 00:05:27.015 10:30:51 spdkcli_tcp -- common/autotest_common.sh@973 -- # wait 619196 00:05:27.276 00:05:27.276 real 0m1.383s 00:05:27.276 user 0m2.532s 00:05:27.276 sys 0m0.400s 00:05:27.276 10:30:51 spdkcli_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:27.276 10:30:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:27.276 ************************************ 00:05:27.276 END TEST spdkcli_tcp 00:05:27.276 ************************************ 00:05:27.276 10:30:51 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:27.276 10:30:51 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:27.276 10:30:51 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:27.276 10:30:51 -- common/autotest_common.sh@10 -- # set +x 00:05:27.276 ************************************ 00:05:27.276 START TEST dpdk_mem_utility 00:05:27.276 ************************************ 00:05:27.537 10:30:51 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:27.537 * Looking for test storage... 
00:05:27.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:27.537 10:30:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:27.537 10:30:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=619604 00:05:27.537 10:30:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 619604 00:05:27.537 10:30:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.537 10:30:51 dpdk_mem_utility -- common/autotest_common.sh@830 -- # '[' -z 619604 ']' 00:05:27.537 10:30:51 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.537 10:30:51 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:27.537 10:30:51 dpdk_mem_utility -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.537 10:30:51 dpdk_mem_utility -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:27.537 10:30:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:27.537 [2024-06-10 10:30:51.719766] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:05:27.537 [2024-06-10 10:30:51.719831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid619604 ] 00:05:27.537 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.538 [2024-06-10 10:30:51.784941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.798 [2024-06-10 10:30:51.857148] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.380 10:30:52 dpdk_mem_utility -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:28.380 10:30:52 dpdk_mem_utility -- common/autotest_common.sh@863 -- # return 0 00:05:28.380 10:30:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:28.380 10:30:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:28.380 10:30:52 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:28.380 10:30:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:28.380 { 00:05:28.380 "filename": "/tmp/spdk_mem_dump.txt" 00:05:28.380 } 00:05:28.380 10:30:52 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:28.380 10:30:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:28.380 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:28.380 1 heaps totaling size 814.000000 MiB 00:05:28.380 size: 814.000000 MiB heap id: 0 00:05:28.380 end heaps---------- 00:05:28.380 8 mempools totaling size 598.116089 MiB 00:05:28.380 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:28.380 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:28.380 size: 84.521057 MiB name: bdev_io_619604 00:05:28.380 size: 51.011292 MiB name: evtpool_619604 00:05:28.380 size: 50.003479 MiB name: 
msgpool_619604 00:05:28.380 size: 21.763794 MiB name: PDU_Pool 00:05:28.380 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:28.380 size: 0.026123 MiB name: Session_Pool 00:05:28.380 end mempools------- 00:05:28.380 6 memzones totaling size 4.142822 MiB 00:05:28.380 size: 1.000366 MiB name: RG_ring_0_619604 00:05:28.380 size: 1.000366 MiB name: RG_ring_1_619604 00:05:28.380 size: 1.000366 MiB name: RG_ring_4_619604 00:05:28.380 size: 1.000366 MiB name: RG_ring_5_619604 00:05:28.380 size: 0.125366 MiB name: RG_ring_2_619604 00:05:28.380 size: 0.015991 MiB name: RG_ring_3_619604 00:05:28.380 end memzones------- 00:05:28.380 10:30:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:28.380 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:28.380 list of free elements. size: 12.519348 MiB 00:05:28.380 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:28.380 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:28.380 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:28.380 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:28.380 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:28.380 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:28.380 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:28.380 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:28.380 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:28.380 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:28.380 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:28.380 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:28.380 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:28.380 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:28.380 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:28.380 list of standard malloc elements. 
size: 199.218079 MiB 00:05:28.380 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:28.380 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:28.380 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:28.380 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:28.380 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:28.380 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:28.380 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:28.380 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:28.380 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:28.380 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:28.380 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:28.380 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:28.380 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:28.380 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:28.380 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:28.380 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:28.380 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:28.380 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:28.380 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:28.380 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:28.380 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:28.380 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:28.380 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:28.380 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:28.380 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:28.380 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:28.380 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:28.380 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:28.380 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:28.380 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:28.380 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:28.380 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:28.380 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:28.380 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:28.380 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:28.380 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:28.380 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:28.380 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:28.380 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:28.380 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:28.380 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:28.380 list of memzone associated elements. 
size: 602.262573 MiB 00:05:28.380 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:28.380 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:28.380 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:28.380 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:28.380 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:28.380 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_619604_0 00:05:28.380 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:28.380 associated memzone info: size: 48.002930 MiB name: MP_evtpool_619604_0 00:05:28.380 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:28.380 associated memzone info: size: 48.002930 MiB name: MP_msgpool_619604_0 00:05:28.380 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:28.380 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:28.380 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:28.380 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:28.380 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:28.380 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_619604 00:05:28.380 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:28.380 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_619604 00:05:28.380 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:28.380 associated memzone info: size: 1.007996 MiB name: MP_evtpool_619604 00:05:28.380 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:28.380 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:28.380 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:28.380 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:28.380 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:28.380 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:28.380 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:28.380 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:28.380 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:28.380 associated memzone info: size: 1.000366 MiB name: RG_ring_0_619604 00:05:28.380 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:28.380 associated memzone info: size: 1.000366 MiB name: RG_ring_1_619604 00:05:28.380 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:28.380 associated memzone info: size: 1.000366 MiB name: RG_ring_4_619604 00:05:28.380 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:28.380 associated memzone info: size: 1.000366 MiB name: RG_ring_5_619604 00:05:28.380 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:28.380 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_619604 00:05:28.380 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:28.380 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:28.380 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:28.380 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:28.380 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:28.380 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:28.380 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:28.380 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_619604 00:05:28.380 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:28.380 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:28.380 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:28.380 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:28.380 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:28.380 associated memzone info: size: 0.015991 MiB name: RG_ring_3_619604 00:05:28.380 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:28.380 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:28.380 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:28.380 associated memzone info: size: 0.000183 MiB name: MP_msgpool_619604 00:05:28.380 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:28.380 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_619604 00:05:28.380 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:28.380 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:28.380 10:30:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:28.380 10:30:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 619604 00:05:28.380 10:30:52 dpdk_mem_utility -- common/autotest_common.sh@949 -- # '[' -z 619604 ']' 00:05:28.380 10:30:52 dpdk_mem_utility -- common/autotest_common.sh@953 -- # kill -0 619604 00:05:28.380 10:30:52 dpdk_mem_utility -- common/autotest_common.sh@954 -- # uname 00:05:28.380 10:30:52 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:28.380 10:30:52 dpdk_mem_utility -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 619604 00:05:28.380 10:30:52 dpdk_mem_utility -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:28.380 10:30:52 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:28.380 10:30:52 dpdk_mem_utility -- common/autotest_common.sh@967 -- # echo 'killing process with pid 619604' 00:05:28.380 killing process with pid 619604 00:05:28.380 10:30:52 dpdk_mem_utility -- common/autotest_common.sh@968 -- # kill 619604 00:05:28.380 10:30:52 dpdk_mem_utility -- common/autotest_common.sh@973 -- # wait 619604 00:05:28.686 00:05:28.686 real 0m1.283s 00:05:28.686 user 0m1.365s 00:05:28.686 sys 0m0.365s 00:05:28.686 10:30:52 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:28.686 10:30:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:28.686 ************************************ 00:05:28.686 END TEST dpdk_mem_utility 00:05:28.686 ************************************ 00:05:28.686 10:30:52 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:28.686 10:30:52 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:28.686 10:30:52 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:28.686 10:30:52 -- common/autotest_common.sh@10 -- # set +x 00:05:28.686 ************************************ 00:05:28.686 START TEST event 00:05:28.686 ************************************ 00:05:28.686 10:30:52 event -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:28.947 * Looking for test storage... 
00:05:28.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:28.947 10:30:53 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:28.947 10:30:53 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:28.947 10:30:53 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:28.947 10:30:53 event -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:05:28.947 10:30:53 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:28.947 10:30:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.947 ************************************ 00:05:28.947 START TEST event_perf 00:05:28.947 ************************************ 00:05:28.947 10:30:53 event.event_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:28.947 Running I/O for 1 seconds...[2024-06-10 10:30:53.076609] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:05:28.947 [2024-06-10 10:30:53.076711] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid619992 ] 00:05:28.947 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.947 [2024-06-10 10:30:53.153791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:28.947 [2024-06-10 10:30:53.232408] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.947 [2024-06-10 10:30:53.232494] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.947 [2024-06-10 10:30:53.232651] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:05:28.947 [2024-06-10 10:30:53.232651] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.333 Running I/O for 1 seconds... 00:05:30.333 lcore 0: 172024 00:05:30.333 lcore 1: 172023 00:05:30.333 lcore 2: 172023 00:05:30.333 lcore 3: 172026 00:05:30.333 done. 00:05:30.333 00:05:30.333 real 0m1.231s 00:05:30.333 user 0m4.147s 00:05:30.333 sys 0m0.081s 00:05:30.333 10:30:54 event.event_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:30.333 10:30:54 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:30.333 ************************************ 00:05:30.333 END TEST event_perf 00:05:30.333 ************************************ 00:05:30.333 10:30:54 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:30.333 10:30:54 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:05:30.333 10:30:54 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:30.333 10:30:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:30.333 ************************************ 00:05:30.333 START TEST event_reactor 00:05:30.333 ************************************ 00:05:30.333 10:30:54 event.event_reactor -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:30.334 [2024-06-10 10:30:54.384194] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
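Note: event_perf above is started with -m 0xF, and the log confirms four reactors on lcores 0-3, each reporting roughly 172k events for the 1-second run; the -m 0x3 and -m 0x1 runs elsewhere in this log likewise come up with two cores and one core. A quick way to expand such a hex core mask in the shell, for illustration only and not part of the test scripts:

mask=0xF                                    # the value passed via -m above
for ((core = 0; core < 8; core++)); do
    (( (mask >> core) & 1 )) && echo "lcore $core enabled"
done
# Prints lcore 0..3 for 0xF, matching the four "Reactor started on core N" notices.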
00:05:30.334 [2024-06-10 10:30:54.384304] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid620344 ] 00:05:30.334 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.334 [2024-06-10 10:30:54.451274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.334 [2024-06-10 10:30:54.521132] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.718 test_start 00:05:31.718 oneshot 00:05:31.718 tick 100 00:05:31.718 tick 100 00:05:31.718 tick 250 00:05:31.718 tick 100 00:05:31.718 tick 100 00:05:31.718 tick 250 00:05:31.718 tick 100 00:05:31.718 tick 500 00:05:31.718 tick 100 00:05:31.718 tick 100 00:05:31.718 tick 250 00:05:31.718 tick 100 00:05:31.718 tick 100 00:05:31.718 test_end 00:05:31.718 00:05:31.718 real 0m1.211s 00:05:31.718 user 0m1.131s 00:05:31.718 sys 0m0.076s 00:05:31.718 10:30:55 event.event_reactor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:31.718 10:30:55 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:31.718 ************************************ 00:05:31.718 END TEST event_reactor 00:05:31.718 ************************************ 00:05:31.718 10:30:55 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:31.718 10:30:55 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:05:31.718 10:30:55 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:31.718 10:30:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.718 ************************************ 00:05:31.718 START TEST event_reactor_perf 00:05:31.718 ************************************ 00:05:31.718 10:30:55 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:31.718 [2024-06-10 10:30:55.670664] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
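Note: every test in this log, including the event_reactor run just above, is driven through run_test, which prints the starred START/END banners and the real/user/sys timing lines. A hedged sketch of the shape of that wrapper; the real one in autotest_common.sh also tracks nesting and exit codes, so treat this as an assumption, not the actual code.

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"        # the bash time builtin produces the real/user/sys lines
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}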
00:05:31.718 [2024-06-10 10:30:55.670740] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid620536 ] 00:05:31.718 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.718 [2024-06-10 10:30:55.737594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.718 [2024-06-10 10:30:55.806445] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.660 test_start 00:05:32.660 test_end 00:05:32.660 Performance: 367073 events per second 00:05:32.660 00:05:32.660 real 0m1.211s 00:05:32.660 user 0m1.142s 00:05:32.660 sys 0m0.065s 00:05:32.660 10:30:56 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:32.660 10:30:56 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:32.660 ************************************ 00:05:32.660 END TEST event_reactor_perf 00:05:32.660 ************************************ 00:05:32.660 10:30:56 event -- event/event.sh@49 -- # uname -s 00:05:32.660 10:30:56 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:32.660 10:30:56 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:32.660 10:30:56 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:32.660 10:30:56 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:32.660 10:30:56 event -- common/autotest_common.sh@10 -- # set +x 00:05:32.660 ************************************ 00:05:32.660 START TEST event_scheduler 00:05:32.660 ************************************ 00:05:32.660 10:30:56 event.event_scheduler -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:32.921 * Looking for test storage... 00:05:32.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:32.921 10:30:57 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:32.921 10:30:57 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=620779 00:05:32.921 10:30:57 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.921 10:30:57 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:32.921 10:30:57 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 620779 00:05:32.921 10:30:57 event.event_scheduler -- common/autotest_common.sh@830 -- # '[' -z 620779 ']' 00:05:32.921 10:30:57 event.event_scheduler -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.921 10:30:57 event.event_scheduler -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:32.921 10:30:57 event.event_scheduler -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
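Note: the scheduler test being launched here starts the scheduler app with -m 0xF -p 0x2 --wait-for-rpc -f (the EAL line below shows this as --main-lcore=2) and then, as the trace that follows shows, switches it to the dynamic scheduler and kicks off framework init over RPC; the POWER: lines are the app moving the cpufreq governors to 'performance' and restoring 'powersave' at shutdown. The equivalent calls issued by hand against the app's default socket would look roughly like this; the method names are taken from the rpc_get_methods list earlier in this log.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$rpc" framework_set_scheduler dynamic   # what rpc_cmd framework_set_scheduler dynamic does below
"$rpc" framework_start_init              # releases --wait-for-rpc so the reactors start working
"$rpc" framework_get_scheduler           # inspect the currently active scheduler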
00:05:32.921 10:30:57 event.event_scheduler -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:32.921 10:30:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:32.921 [2024-06-10 10:30:57.086767] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:05:32.921 [2024-06-10 10:30:57.086834] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid620779 ] 00:05:32.921 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.921 [2024-06-10 10:30:57.142209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:33.182 [2024-06-10 10:30:57.209116] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.182 [2024-06-10 10:30:57.209146] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.182 [2024-06-10 10:30:57.209284] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:05:33.182 [2024-06-10 10:30:57.209286] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:05:33.751 10:30:57 event.event_scheduler -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:33.751 10:30:57 event.event_scheduler -- common/autotest_common.sh@863 -- # return 0 00:05:33.751 10:30:57 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:33.751 10:30:57 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:33.751 10:30:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:33.751 POWER: Env isn't set yet! 00:05:33.751 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:33.751 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:33.751 POWER: Cannot set governor of lcore 0 to userspace 00:05:33.751 POWER: Attempting to initialise PSTAT power management... 
00:05:33.751 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:33.751 POWER: Initialized successfully for lcore 0 power management 00:05:33.751 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:33.751 POWER: Initialized successfully for lcore 1 power management 00:05:33.751 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:33.751 POWER: Initialized successfully for lcore 2 power management 00:05:33.751 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:33.751 POWER: Initialized successfully for lcore 3 power management 00:05:33.751 [2024-06-10 10:30:57.921451] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:33.751 [2024-06-10 10:30:57.921464] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:33.751 [2024-06-10 10:30:57.921469] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:33.751 10:30:57 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:33.751 10:30:57 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:33.751 10:30:57 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:33.751 10:30:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:33.751 [2024-06-10 10:30:57.978358] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:33.751 10:30:57 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:33.751 10:30:57 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:33.751 10:30:57 event.event_scheduler -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:33.751 10:30:57 event.event_scheduler -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:33.751 10:30:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:33.751 ************************************ 00:05:33.751 START TEST scheduler_create_thread 00:05:33.751 ************************************ 00:05:33.751 10:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # scheduler_create_thread 00:05:33.751 10:30:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:33.751 10:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:33.751 10:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.751 2 00:05:33.751 10:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:33.751 10:30:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:33.751 10:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:33.751 10:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.011 3 00:05:34.011 10:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:34.011 10:30:58 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:34.011 10:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:34.011 10:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.011 4 00:05:34.011 10:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:34.011 10:30:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:34.011 10:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:34.011 10:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.011 5 00:05:34.012 10:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:34.012 10:30:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:34.012 10:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:34.012 10:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.012 6 00:05:34.012 10:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:34.012 10:30:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:34.012 10:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:34.012 10:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.012 7 00:05:34.012 10:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:34.012 10:30:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:34.012 10:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:34.012 10:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.012 8 00:05:34.012 10:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:34.012 10:30:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:34.012 10:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:34.012 10:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.012 9 00:05:34.012 10:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:34.012 10:30:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:34.012 10:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:05:34.012 10:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.394 10 00:05:35.394 10:30:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:35.394 10:30:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:35.394 10:30:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:35.394 10:30:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.965 10:31:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:35.965 10:31:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:35.965 10:31:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:35.965 10:31:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:35.965 10:31:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.904 10:31:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:36.904 10:31:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:36.904 10:31:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:36.904 10:31:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.475 10:31:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:37.475 10:31:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:37.475 10:31:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:37.475 10:31:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:37.475 10:31:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.054 10:31:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:38.054 00:05:38.054 real 0m4.216s 00:05:38.054 user 0m0.024s 00:05:38.054 sys 0m0.007s 00:05:38.054 10:31:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:38.054 10:31:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.054 ************************************ 00:05:38.054 END TEST scheduler_create_thread 00:05:38.054 ************************************ 00:05:38.054 10:31:02 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:38.054 10:31:02 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 620779 00:05:38.054 10:31:02 event.event_scheduler -- common/autotest_common.sh@949 -- # '[' -z 620779 ']' 00:05:38.054 10:31:02 event.event_scheduler -- common/autotest_common.sh@953 -- # kill -0 620779 00:05:38.054 10:31:02 event.event_scheduler -- common/autotest_common.sh@954 -- # uname 
00:05:38.054 10:31:02 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:38.054 10:31:02 event.event_scheduler -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 620779 00:05:38.054 10:31:02 event.event_scheduler -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:05:38.054 10:31:02 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:05:38.054 10:31:02 event.event_scheduler -- common/autotest_common.sh@967 -- # echo 'killing process with pid 620779' 00:05:38.054 killing process with pid 620779 00:05:38.054 10:31:02 event.event_scheduler -- common/autotest_common.sh@968 -- # kill 620779 00:05:38.054 10:31:02 event.event_scheduler -- common/autotest_common.sh@973 -- # wait 620779 00:05:38.314 [2024-06-10 10:31:02.510394] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:38.574 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:38.574 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:38.574 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:38.574 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:38.574 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:38.574 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:38.574 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:38.574 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:38.574 00:05:38.575 real 0m5.755s 00:05:38.575 user 0m13.362s 00:05:38.575 sys 0m0.354s 00:05:38.575 10:31:02 event.event_scheduler -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:38.575 10:31:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:38.575 ************************************ 00:05:38.575 END TEST event_scheduler 00:05:38.575 ************************************ 00:05:38.575 10:31:02 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:38.575 10:31:02 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:38.575 10:31:02 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:38.575 10:31:02 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:38.575 10:31:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:38.575 ************************************ 00:05:38.575 START TEST app_repeat 00:05:38.575 ************************************ 00:05:38.575 10:31:02 event.app_repeat -- common/autotest_common.sh@1124 -- # app_repeat_test 00:05:38.575 10:31:02 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.575 10:31:02 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.575 10:31:02 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:38.575 10:31:02 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.575 10:31:02 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:38.575 10:31:02 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:38.575 10:31:02 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:38.575 10:31:02 event.app_repeat -- event/event.sh@19 -- # repeat_pid=622147 00:05:38.575 10:31:02 
event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:38.575 10:31:02 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:38.575 10:31:02 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 622147' 00:05:38.575 Process app_repeat pid: 622147 00:05:38.575 10:31:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:38.575 10:31:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:38.575 spdk_app_start Round 0 00:05:38.575 10:31:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 622147 /var/tmp/spdk-nbd.sock 00:05:38.575 10:31:02 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 622147 ']' 00:05:38.575 10:31:02 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:38.575 10:31:02 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:38.575 10:31:02 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:38.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:38.575 10:31:02 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:38.575 10:31:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:38.575 [2024-06-10 10:31:02.809067] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:05:38.575 [2024-06-10 10:31:02.809131] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid622147 ] 00:05:38.575 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.834 [2024-06-10 10:31:02.870983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:38.834 [2024-06-10 10:31:02.938028] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.834 [2024-06-10 10:31:02.938032] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.402 10:31:03 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:39.402 10:31:03 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:05:39.402 10:31:03 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.662 Malloc0 00:05:39.662 10:31:03 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.662 Malloc1 00:05:39.662 10:31:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.662 10:31:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.922 10:31:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.922 10:31:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:39.922 10:31:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.922 10:31:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:39.922 10:31:03 event.app_repeat -- 
bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.922 10:31:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.922 10:31:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.922 10:31:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:39.922 10:31:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.922 10:31:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:39.922 10:31:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:39.922 10:31:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:39.922 10:31:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.922 10:31:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:39.922 /dev/nbd0 00:05:39.922 10:31:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:39.922 10:31:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:39.922 10:31:04 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:05:39.922 10:31:04 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:39.922 10:31:04 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:39.922 10:31:04 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:39.922 10:31:04 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:05:39.922 10:31:04 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:39.922 10:31:04 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:39.922 10:31:04 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:39.922 10:31:04 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.922 1+0 records in 00:05:39.922 1+0 records out 00:05:39.922 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240316 s, 17.0 MB/s 00:05:39.922 10:31:04 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:39.922 10:31:04 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:39.922 10:31:04 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:39.923 10:31:04 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:39.923 10:31:04 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:39.923 10:31:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.923 10:31:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.923 10:31:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:40.183 /dev/nbd1 00:05:40.183 10:31:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:40.183 10:31:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:40.183 10:31:04 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:05:40.183 10:31:04 event.app_repeat -- 
common/autotest_common.sh@868 -- # local i 00:05:40.183 10:31:04 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:40.183 10:31:04 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:40.183 10:31:04 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:05:40.183 10:31:04 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:40.183 10:31:04 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:40.183 10:31:04 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:40.183 10:31:04 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.184 1+0 records in 00:05:40.184 1+0 records out 00:05:40.184 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271325 s, 15.1 MB/s 00:05:40.184 10:31:04 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.184 10:31:04 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:40.184 10:31:04 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.184 10:31:04 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:40.184 10:31:04 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:40.184 10:31:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.184 10:31:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.184 10:31:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.184 10:31:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.184 10:31:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.444 10:31:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:40.444 { 00:05:40.444 "nbd_device": "/dev/nbd0", 00:05:40.444 "bdev_name": "Malloc0" 00:05:40.444 }, 00:05:40.444 { 00:05:40.444 "nbd_device": "/dev/nbd1", 00:05:40.444 "bdev_name": "Malloc1" 00:05:40.444 } 00:05:40.444 ]' 00:05:40.444 10:31:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:40.444 { 00:05:40.444 "nbd_device": "/dev/nbd0", 00:05:40.444 "bdev_name": "Malloc0" 00:05:40.444 }, 00:05:40.444 { 00:05:40.444 "nbd_device": "/dev/nbd1", 00:05:40.444 "bdev_name": "Malloc1" 00:05:40.444 } 00:05:40.444 ]' 00:05:40.444 10:31:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.444 10:31:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:40.444 /dev/nbd1' 00:05:40.444 10:31:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:40.444 /dev/nbd1' 00:05:40.444 10:31:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.444 10:31:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:40.444 10:31:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:40.444 10:31:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:40.444 10:31:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:40.444 10:31:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:40.444 10:31:04 event.app_repeat -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.444 10:31:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.444 10:31:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:40.444 10:31:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:40.444 10:31:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:40.444 10:31:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:40.444 256+0 records in 00:05:40.444 256+0 records out 00:05:40.444 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124663 s, 84.1 MB/s 00:05:40.444 10:31:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.444 10:31:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:40.444 256+0 records in 00:05:40.444 256+0 records out 00:05:40.444 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0172633 s, 60.7 MB/s 00:05:40.444 10:31:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.444 10:31:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:40.444 256+0 records in 00:05:40.444 256+0 records out 00:05:40.444 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.016811 s, 62.4 MB/s 00:05:40.444 10:31:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:40.444 10:31:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.444 10:31:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.444 10:31:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:40.444 10:31:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:40.444 10:31:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:40.444 10:31:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:40.444 10:31:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.444 10:31:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:40.444 10:31:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.445 10:31:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:40.445 10:31:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:40.445 10:31:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:40.445 10:31:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.445 10:31:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.445 10:31:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:40.445 10:31:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # 
local i 00:05:40.445 10:31:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.445 10:31:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:40.745 10:31:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:40.745 10:31:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:40.745 10:31:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:40.745 10:31:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.745 10:31:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.745 10:31:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:40.745 10:31:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:40.745 10:31:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.745 10:31:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.745 10:31:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:40.745 10:31:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:40.745 10:31:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:40.745 10:31:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:40.745 10:31:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.745 10:31:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.745 10:31:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:40.745 10:31:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:40.745 10:31:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.745 10:31:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.745 10:31:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.745 10:31:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.005 10:31:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:41.005 10:31:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:41.005 10:31:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.005 10:31:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:41.005 10:31:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:41.005 10:31:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.005 10:31:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:41.005 10:31:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:41.005 10:31:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:41.005 10:31:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:41.005 10:31:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:41.005 10:31:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:41.005 10:31:05 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:41.265 10:31:05 event.app_repeat -- event/event.sh@35 -- 
# sleep 3 00:05:41.265 [2024-06-10 10:31:05.463811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.265 [2024-06-10 10:31:05.527807] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.265 [2024-06-10 10:31:05.527810] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.525 [2024-06-10 10:31:05.559848] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:41.525 [2024-06-10 10:31:05.559883] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:44.067 10:31:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:44.067 10:31:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:44.067 spdk_app_start Round 1 00:05:44.067 10:31:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 622147 /var/tmp/spdk-nbd.sock 00:05:44.067 10:31:08 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 622147 ']' 00:05:44.067 10:31:08 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:44.067 10:31:08 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:44.067 10:31:08 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:44.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:44.067 10:31:08 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:44.067 10:31:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:44.327 10:31:08 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:44.327 10:31:08 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:05:44.328 10:31:08 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.588 Malloc0 00:05:44.588 10:31:08 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.588 Malloc1 00:05:44.588 10:31:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.588 10:31:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.588 10:31:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.588 10:31:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:44.588 10:31:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.588 10:31:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:44.588 10:31:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.588 10:31:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.588 10:31:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.588 10:31:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:44.588 10:31:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.588 10:31:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 
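Every app_repeat round repeats the same setup shown here: two 64 MiB malloc bdevs with a 4096-byte block size are created over the app's RPC socket and then exported through the kernel nbd driver. A condensed sketch of those calls, assuming scripts/rpc.py from the SPDK tree, the /var/tmp/spdk-nbd.sock socket used by this test, and that the nbd module is already loaded (the test runs modprobe nbd up front):

  rpc="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

  # Two RAM-backed bdevs, 64 MiB each with 4096-byte blocks; the RPC
  # prints the new bdev name (Malloc0, Malloc1) on stdout.
  $rpc bdev_malloc_create 64 4096
  $rpc bdev_malloc_create 64 4096

  # Expose each bdev as a kernel block device via nbd.
  $rpc nbd_start_disk Malloc0 /dev/nbd0
  $rpc nbd_start_disk Malloc1 /dev/nbd1

  # List what is attached; jq extracts just the device paths.
  $rpc nbd_get_disks | jq -r '.[] | .nbd_device'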
00:05:44.588 10:31:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:44.588 10:31:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:44.588 10:31:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.588 10:31:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:44.849 /dev/nbd0 00:05:44.849 10:31:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:44.849 10:31:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:44.849 10:31:08 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:05:44.849 10:31:08 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:44.849 10:31:08 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:44.849 10:31:08 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:44.849 10:31:08 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:05:44.849 10:31:08 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:44.849 10:31:08 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:44.849 10:31:08 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:44.849 10:31:08 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.849 1+0 records in 00:05:44.849 1+0 records out 00:05:44.849 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021172 s, 19.3 MB/s 00:05:44.849 10:31:08 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.849 10:31:08 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:44.849 10:31:08 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.849 10:31:09 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:44.849 10:31:09 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:44.849 10:31:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.849 10:31:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.849 10:31:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:45.138 /dev/nbd1 00:05:45.138 10:31:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:45.138 10:31:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:45.138 10:31:09 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:05:45.138 10:31:09 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:45.138 10:31:09 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:45.138 10:31:09 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:45.138 10:31:09 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:05:45.138 10:31:09 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:45.138 10:31:09 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:45.138 10:31:09 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 
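The waitfornbd helper traced above gates every use of a freshly attached device: it polls /proc/partitions until the nbd name appears, then reads a single block with O_DIRECT and checks that real data came back. A rough standalone equivalent is below; the retry count and sleep interval are assumptions rather than values taken from the helper itself.

  wait_for_nbd() {
      local name=$1 tmp size i
      tmp=$(mktemp)
      # Poll until the kernel lists the device.
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$name" /proc/partitions && break
          sleep 0.1
      done
      # Read one 4 KiB block bypassing the page cache and make sure the
      # resulting file is non-empty.
      dd if=/dev/"$name" of="$tmp" bs=4096 count=1 iflag=direct
      size=$(stat -c %s "$tmp")
      rm -f "$tmp"
      [ "$size" != 0 ]
  }

  wait_for_nbd nbd0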
00:05:45.138 10:31:09 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.138 1+0 records in 00:05:45.138 1+0 records out 00:05:45.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274227 s, 14.9 MB/s 00:05:45.138 10:31:09 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.138 10:31:09 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:45.138 10:31:09 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.138 10:31:09 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:45.138 10:31:09 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:45.138 10:31:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.138 10:31:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.138 10:31:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.138 10:31:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.138 10:31:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.138 10:31:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:45.138 { 00:05:45.138 "nbd_device": "/dev/nbd0", 00:05:45.138 "bdev_name": "Malloc0" 00:05:45.138 }, 00:05:45.138 { 00:05:45.138 "nbd_device": "/dev/nbd1", 00:05:45.138 "bdev_name": "Malloc1" 00:05:45.138 } 00:05:45.138 ]' 00:05:45.139 10:31:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:45.139 { 00:05:45.139 "nbd_device": "/dev/nbd0", 00:05:45.139 "bdev_name": "Malloc0" 00:05:45.139 }, 00:05:45.139 { 00:05:45.139 "nbd_device": "/dev/nbd1", 00:05:45.139 "bdev_name": "Malloc1" 00:05:45.139 } 00:05:45.139 ]' 00:05:45.139 10:31:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.139 10:31:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:45.139 /dev/nbd1' 00:05:45.139 10:31:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:45.139 /dev/nbd1' 00:05:45.139 10:31:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.139 10:31:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:45.139 10:31:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:45.139 10:31:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:45.422 10:31:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:45.422 10:31:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:45.422 10:31:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.422 10:31:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.422 10:31:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:45.422 10:31:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.422 10:31:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:45.422 10:31:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:45.422 256+0 records in 00:05:45.422 256+0 records out 00:05:45.422 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124061 s, 84.5 MB/s 00:05:45.422 10:31:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.422 10:31:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:45.422 256+0 records in 00:05:45.422 256+0 records out 00:05:45.422 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0188718 s, 55.6 MB/s 00:05:45.422 10:31:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.422 10:31:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:45.422 256+0 records in 00:05:45.422 256+0 records out 00:05:45.422 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0167403 s, 62.6 MB/s 00:05:45.422 10:31:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:45.422 10:31:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.422 10:31:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.422 10:31:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:45.422 10:31:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.422 10:31:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:45.422 10:31:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:45.422 10:31:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.422 10:31:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:45.422 10:31:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.422 10:31:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:45.422 10:31:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.422 10:31:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:45.422 10:31:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.422 10:31:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.422 10:31:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:45.422 10:31:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:45.422 10:31:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.422 10:31:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:45.422 10:31:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:45.422 10:31:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:45.423 10:31:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:45.423 
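The data-path check repeated in every round is compact: 1 MiB of random data is generated once, written to each exported device with O_DIRECT, and then compared byte for byte against the source file. A sketch of that cycle (the scratch file location is an assumption):

  nbd_list=(/dev/nbd0 /dev/nbd1)
  pattern=/tmp/nbdrandtest

  # 256 x 4 KiB = 1 MiB of random reference data.
  dd if=/dev/urandom of="$pattern" bs=4096 count=256

  for dev in "${nbd_list[@]}"; do
      # Write through O_DIRECT so the check exercises the bdev, not the
      # page cache.
      dd if="$pattern" of="$dev" bs=4096 count=256 oflag=direct
      # cmp exits non-zero on the first mismatching byte.
      cmp -b -n 1M "$pattern" "$dev"
  done

  rm "$pattern"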
10:31:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.423 10:31:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.423 10:31:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:45.423 10:31:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.423 10:31:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.423 10:31:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.423 10:31:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:45.683 10:31:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:45.683 10:31:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:45.683 10:31:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:45.683 10:31:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.683 10:31:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.683 10:31:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:45.683 10:31:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.683 10:31:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.683 10:31:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.683 10:31:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.683 10:31:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.942 10:31:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:45.943 10:31:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:45.943 10:31:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.943 10:31:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:45.943 10:31:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:45.943 10:31:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.943 10:31:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:45.943 10:31:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:45.943 10:31:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:45.943 10:31:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:45.943 10:31:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:45.943 10:31:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:45.943 10:31:10 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:45.943 10:31:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:46.203 [2024-06-10 10:31:10.334692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.203 [2024-06-10 10:31:10.398303] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.203 [2024-06-10 10:31:10.398319] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.203 [2024-06-10 10:31:10.431044] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:05:46.203 [2024-06-10 10:31:10.431080] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:49.501 10:31:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:49.501 10:31:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:49.501 spdk_app_start Round 2 00:05:49.501 10:31:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 622147 /var/tmp/spdk-nbd.sock 00:05:49.501 10:31:13 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 622147 ']' 00:05:49.501 10:31:13 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:49.501 10:31:13 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:49.501 10:31:13 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:49.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:49.501 10:31:13 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:49.501 10:31:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:49.501 10:31:13 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:49.501 10:31:13 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:05:49.501 10:31:13 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.501 Malloc0 00:05:49.501 10:31:13 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.501 Malloc1 00:05:49.501 10:31:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.501 10:31:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.501 10:31:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.501 10:31:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:49.501 10:31:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.501 10:31:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:49.501 10:31:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.501 10:31:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.501 10:31:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.501 10:31:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:49.501 10:31:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.501 10:31:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:49.501 10:31:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:49.501 10:31:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:49.501 10:31:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.501 10:31:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:49.762 /dev/nbd0 00:05:49.762 10:31:13 
event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:49.762 10:31:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:49.762 10:31:13 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:05:49.762 10:31:13 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:49.762 10:31:13 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:49.762 10:31:13 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:49.762 10:31:13 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:05:49.762 10:31:13 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:49.762 10:31:13 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:49.762 10:31:13 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:49.762 10:31:13 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.762 1+0 records in 00:05:49.762 1+0 records out 00:05:49.762 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002456 s, 16.7 MB/s 00:05:49.762 10:31:13 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.762 10:31:13 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:49.762 10:31:13 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.762 10:31:13 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:49.762 10:31:13 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:49.762 10:31:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.762 10:31:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.762 10:31:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:49.762 /dev/nbd1 00:05:49.762 10:31:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:49.762 10:31:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:49.762 10:31:14 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:05:49.762 10:31:14 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:49.762 10:31:14 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:49.762 10:31:14 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:49.763 10:31:14 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:05:49.763 10:31:14 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:49.763 10:31:14 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:49.763 10:31:14 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:49.763 10:31:14 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.763 1+0 records in 00:05:49.763 1+0 records out 00:05:49.763 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249489 s, 16.4 MB/s 00:05:49.763 10:31:14 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.763 10:31:14 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:49.763 10:31:14 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.763 10:31:14 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:49.763 10:31:14 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:49.763 10:31:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.763 10:31:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.763 10:31:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.763 10:31:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.763 10:31:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.023 10:31:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:50.023 { 00:05:50.023 "nbd_device": "/dev/nbd0", 00:05:50.023 "bdev_name": "Malloc0" 00:05:50.023 }, 00:05:50.023 { 00:05:50.023 "nbd_device": "/dev/nbd1", 00:05:50.023 "bdev_name": "Malloc1" 00:05:50.023 } 00:05:50.023 ]' 00:05:50.023 10:31:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:50.023 { 00:05:50.023 "nbd_device": "/dev/nbd0", 00:05:50.023 "bdev_name": "Malloc0" 00:05:50.023 }, 00:05:50.023 { 00:05:50.023 "nbd_device": "/dev/nbd1", 00:05:50.023 "bdev_name": "Malloc1" 00:05:50.023 } 00:05:50.023 ]' 00:05:50.023 10:31:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.023 10:31:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:50.023 /dev/nbd1' 00:05:50.023 10:31:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:50.023 /dev/nbd1' 00:05:50.023 10:31:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.023 10:31:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:50.023 10:31:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:50.023 10:31:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:50.023 10:31:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:50.023 10:31:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:50.023 10:31:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.023 10:31:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.023 10:31:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:50.023 10:31:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.023 10:31:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:50.023 10:31:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:50.023 256+0 records in 00:05:50.023 256+0 records out 00:05:50.023 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121194 s, 86.5 MB/s 00:05:50.023 10:31:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.023 10:31:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:50.023 256+0 records in 00:05:50.023 256+0 records out 00:05:50.023 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159574 s, 65.7 MB/s 00:05:50.023 10:31:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.023 10:31:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:50.284 256+0 records in 00:05:50.284 256+0 records out 00:05:50.284 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0172353 s, 60.8 MB/s 00:05:50.284 10:31:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:50.284 10:31:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.284 10:31:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.284 10:31:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:50.284 10:31:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.284 10:31:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:50.284 10:31:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:50.284 10:31:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.284 10:31:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:50.284 10:31:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.284 10:31:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:50.284 10:31:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:50.284 10:31:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:50.284 10:31:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.284 10:31:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.284 10:31:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:50.284 10:31:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:50.284 10:31:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.284 10:31:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:50.284 10:31:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:50.284 10:31:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:50.284 10:31:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:50.284 10:31:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.284 10:31:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.284 10:31:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:50.284 10:31:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.284 10:31:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
00:05:50.284 10:31:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.284 10:31:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:50.544 10:31:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:50.544 10:31:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:50.544 10:31:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:50.544 10:31:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.544 10:31:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.544 10:31:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:50.544 10:31:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.544 10:31:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.544 10:31:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.544 10:31:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.544 10:31:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.544 10:31:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:50.544 10:31:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:50.544 10:31:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.804 10:31:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:50.804 10:31:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:50.804 10:31:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.804 10:31:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:50.804 10:31:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:50.804 10:31:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:50.804 10:31:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:50.804 10:31:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:50.804 10:31:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:50.804 10:31:14 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:50.804 10:31:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:51.064 [2024-06-10 10:31:15.154216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:51.064 [2024-06-10 10:31:15.217665] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.064 [2024-06-10 10:31:15.217668] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.065 [2024-06-10 10:31:15.249792] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:51.065 [2024-06-10 10:31:15.249825] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
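Teardown mirrors the setup, as traced above: each device is detached over RPC, the helper waits for it to disappear from /proc/partitions, the test asserts that nbd_get_disks now returns an empty list, and the application is told to exit. A condensed sketch (retry count and sleep interval are assumptions):

  rpc="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

  for dev in /dev/nbd0 /dev/nbd1; do
      $rpc nbd_stop_disk "$dev"
      # Wait until the kernel no longer lists the device.
      name=$(basename "$dev")
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$name" /proc/partitions || break
          sleep 0.1
      done
  done

  # Nothing should be attached any more: empty JSON list, count of 0.
  count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  [ "$count" -eq 0 ]

  # Ask the app to shut itself down; the test then sleeps briefly before
  # the next round (or, after the last round, kills the app process for good).
  $rpc spdk_kill_instance SIGTERM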
00:05:54.458 10:31:18 event.app_repeat -- event/event.sh@38 -- # waitforlisten 622147 /var/tmp/spdk-nbd.sock 00:05:54.458 10:31:18 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 622147 ']' 00:05:54.458 10:31:18 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.458 10:31:18 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:54.458 10:31:18 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:54.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:54.458 10:31:18 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:54.458 10:31:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.458 10:31:18 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:54.458 10:31:18 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:05:54.458 10:31:18 event.app_repeat -- event/event.sh@39 -- # killprocess 622147 00:05:54.458 10:31:18 event.app_repeat -- common/autotest_common.sh@949 -- # '[' -z 622147 ']' 00:05:54.458 10:31:18 event.app_repeat -- common/autotest_common.sh@953 -- # kill -0 622147 00:05:54.458 10:31:18 event.app_repeat -- common/autotest_common.sh@954 -- # uname 00:05:54.458 10:31:18 event.app_repeat -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:54.458 10:31:18 event.app_repeat -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 622147 00:05:54.458 10:31:18 event.app_repeat -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:54.458 10:31:18 event.app_repeat -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:54.458 10:31:18 event.app_repeat -- common/autotest_common.sh@967 -- # echo 'killing process with pid 622147' 00:05:54.458 killing process with pid 622147 00:05:54.458 10:31:18 event.app_repeat -- common/autotest_common.sh@968 -- # kill 622147 00:05:54.458 10:31:18 event.app_repeat -- common/autotest_common.sh@973 -- # wait 622147 00:05:54.458 spdk_app_start is called in Round 0. 00:05:54.458 Shutdown signal received, stop current app iteration 00:05:54.458 Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 reinitialization... 00:05:54.458 spdk_app_start is called in Round 1. 00:05:54.458 Shutdown signal received, stop current app iteration 00:05:54.458 Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 reinitialization... 00:05:54.458 spdk_app_start is called in Round 2. 00:05:54.458 Shutdown signal received, stop current app iteration 00:05:54.458 Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 reinitialization... 00:05:54.458 spdk_app_start is called in Round 3. 
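killprocess, used here to stop the app_repeat binary (and earlier the scheduler app), is a defensive wrapper around kill: it checks that the pid is set and alive, looks up the process name so it never signals a sudo wrapper by mistake, then signals the process and reaps it. The real helper lives in test/common/autotest_common.sh; the Linux-only sketch below is not a verbatim copy.

  killprocess() {
      local pid=$1 name
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1               # still running?
      name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0 in this run
      [ "$name" != sudo ] || return 1          # never signal the sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                              # reap it; pid must be a child
  }

  killprocess 622147   # pid taken from the trace above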
00:05:54.458 Shutdown signal received, stop current app iteration 00:05:54.458 10:31:18 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:54.458 10:31:18 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:54.458 00:05:54.458 real 0m15.572s 00:05:54.458 user 0m33.672s 00:05:54.458 sys 0m2.091s 00:05:54.458 10:31:18 event.app_repeat -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:54.458 10:31:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.458 ************************************ 00:05:54.458 END TEST app_repeat 00:05:54.458 ************************************ 00:05:54.458 10:31:18 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:54.458 10:31:18 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:54.458 10:31:18 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:54.458 10:31:18 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:54.458 10:31:18 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.458 ************************************ 00:05:54.458 START TEST cpu_locks 00:05:54.458 ************************************ 00:05:54.458 10:31:18 event.cpu_locks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:54.458 * Looking for test storage... 00:05:54.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:54.458 10:31:18 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:54.458 10:31:18 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:54.458 10:31:18 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:54.458 10:31:18 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:54.458 10:31:18 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:54.458 10:31:18 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:54.458 10:31:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.458 ************************************ 00:05:54.458 START TEST default_locks 00:05:54.458 ************************************ 00:05:54.458 10:31:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # default_locks 00:05:54.458 10:31:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=625403 00:05:54.458 10:31:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 625403 00:05:54.458 10:31:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.458 10:31:18 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 625403 ']' 00:05:54.458 10:31:18 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.458 10:31:18 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:54.458 10:31:18 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
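The default_locks case starting here launches a single spdk_tgt pinned to core 0 (-m 0x1, pid 625403) and then verifies that the target really holds its per-core lock. The locks_exist helper is not shown in this log, but based on the trace that follows it amounts to roughly the sketch below (lock files follow the /var/tmp/spdk_cpu_lock_* naming seen later in this run):

  locks_exist() {
    local pid=$1
    # grep -q closes the pipe early, which is why the log shows "lslocks: write error"
    lslocks -p "$pid" | grep -q spdk_cpu_lock
  }
  locks_exist 625403 && echo 'spdk_tgt is holding its core lock'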
00:05:54.458 10:31:18 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:54.458 10:31:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.458 [2024-06-10 10:31:18.620456] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:05:54.458 [2024-06-10 10:31:18.620520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid625403 ] 00:05:54.458 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.458 [2024-06-10 10:31:18.685050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.719 [2024-06-10 10:31:18.760186] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.291 10:31:19 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:55.291 10:31:19 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 0 00:05:55.291 10:31:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 625403 00:05:55.291 10:31:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 625403 00:05:55.292 10:31:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:55.862 lslocks: write error 00:05:55.862 10:31:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 625403 00:05:55.862 10:31:19 event.cpu_locks.default_locks -- common/autotest_common.sh@949 -- # '[' -z 625403 ']' 00:05:55.862 10:31:19 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # kill -0 625403 00:05:55.862 10:31:20 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # uname 00:05:55.862 10:31:20 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:55.862 10:31:20 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 625403 00:05:55.862 10:31:20 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:55.862 10:31:20 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:55.862 10:31:20 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 625403' 00:05:55.862 killing process with pid 625403 00:05:55.862 10:31:20 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # kill 625403 00:05:55.862 10:31:20 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # wait 625403 00:05:56.123 10:31:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 625403 00:05:56.123 10:31:20 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:05:56.123 10:31:20 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 625403 00:05:56.123 10:31:20 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:05:56.123 10:31:20 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:56.123 10:31:20 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:05:56.123 10:31:20 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:56.123 10:31:20 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # 
waitforlisten 625403 00:05:56.123 10:31:20 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 625403 ']' 00:05:56.123 10:31:20 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.123 10:31:20 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:56.123 10:31:20 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.123 10:31:20 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:56.123 10:31:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (625403) - No such process 00:05:56.123 ERROR: process (pid: 625403) is no longer running 00:05:56.123 10:31:20 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:56.123 10:31:20 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 1 00:05:56.123 10:31:20 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:05:56.123 10:31:20 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:56.123 10:31:20 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:56.123 10:31:20 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:56.123 10:31:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:56.123 10:31:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:56.123 10:31:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:56.123 10:31:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:56.123 00:05:56.123 real 0m1.716s 00:05:56.123 user 0m1.820s 00:05:56.123 sys 0m0.542s 00:05:56.123 10:31:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:56.123 10:31:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.123 ************************************ 00:05:56.123 END TEST default_locks 00:05:56.123 ************************************ 00:05:56.123 10:31:20 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:56.123 10:31:20 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:56.123 10:31:20 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:56.123 10:31:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.123 ************************************ 00:05:56.123 START TEST default_locks_via_rpc 00:05:56.123 ************************************ 00:05:56.123 10:31:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # default_locks_via_rpc 00:05:56.123 10:31:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=625773 00:05:56.123 10:31:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 625773 00:05:56.123 10:31:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.123 10:31:20 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 625773 ']' 00:05:56.123 10:31:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.123 10:31:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:56.123 10:31:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.123 10:31:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:56.123 10:31:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.123 [2024-06-10 10:31:20.407153] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:05:56.123 [2024-06-10 10:31:20.407205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid625773 ] 00:05:56.384 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.384 [2024-06-10 10:31:20.468910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.384 [2024-06-10 10:31:20.540622] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.955 10:31:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:56.956 10:31:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:05:56.956 10:31:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:56.956 10:31:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:56.956 10:31:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.956 10:31:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:56.956 10:31:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:56.956 10:31:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:56.956 10:31:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:56.956 10:31:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:56.956 10:31:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:56.956 10:31:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:56.956 10:31:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.956 10:31:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:56.956 10:31:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 625773 00:05:56.956 10:31:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 625773 00:05:56.956 10:31:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.525 10:31:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 625773 00:05:57.525 10:31:21 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@949 -- # '[' -z 625773 ']' 00:05:57.525 10:31:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # kill -0 625773 00:05:57.525 10:31:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # uname 00:05:57.525 10:31:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:57.525 10:31:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 625773 00:05:57.525 10:31:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:57.525 10:31:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:57.525 10:31:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 625773' 00:05:57.525 killing process with pid 625773 00:05:57.525 10:31:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # kill 625773 00:05:57.525 10:31:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # wait 625773 00:05:57.788 00:05:57.788 real 0m1.650s 00:05:57.788 user 0m1.764s 00:05:57.788 sys 0m0.519s 00:05:57.788 10:31:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:57.788 10:31:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.788 ************************************ 00:05:57.788 END TEST default_locks_via_rpc 00:05:57.788 ************************************ 00:05:57.788 10:31:22 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:57.788 10:31:22 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:57.788 10:31:22 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:57.788 10:31:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.048 ************************************ 00:05:58.048 START TEST non_locking_app_on_locked_coremask 00:05:58.048 ************************************ 00:05:58.048 10:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # non_locking_app_on_locked_coremask 00:05:58.048 10:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=626136 00:05:58.048 10:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 626136 /var/tmp/spdk.sock 00:05:58.048 10:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:58.048 10:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 626136 ']' 00:05:58.048 10:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.048 10:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:58.048 10:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
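The default_locks_via_rpc case that finished just above exercises the same core-0 lock but toggles it at runtime instead of at process start: framework_disable_cpumask_locks releases the lock (the no_locks check passes), and framework_enable_cpumask_locks re-acquires it (locks_exist passes again). Driving that toggle by hand would look roughly like this, assuming the target listens on rpc.py's default /var/tmp/spdk.sock:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc framework_disable_cpumask_locks   # per-core lock file released; lslocks stops matching spdk_cpu_lock
  $rpc framework_enable_cpumask_locks    # lock re-acquired for the cores given in -m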
00:05:58.048 10:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:58.048 10:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.048 [2024-06-10 10:31:22.133942] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:05:58.048 [2024-06-10 10:31:22.133999] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid626136 ] 00:05:58.048 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.048 [2024-06-10 10:31:22.195337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.048 [2024-06-10 10:31:22.266926] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.619 10:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:58.619 10:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:05:58.619 10:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:58.619 10:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=626455 00:05:58.619 10:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 626455 /var/tmp/spdk2.sock 00:05:58.619 10:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 626455 ']' 00:05:58.619 10:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.619 10:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:58.619 10:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.619 10:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:58.619 10:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.879 [2024-06-10 10:31:22.926503] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:05:58.880 [2024-06-10 10:31:22.926555] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid626455 ] 00:05:58.880 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.880 [2024-06-10 10:31:23.015075] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
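non_locking_app_on_locked_coremask runs two targets on the same core: the first (pid 626136) takes the core-0 lock as usual, while the second (pid 626455, whose "CPU core locks deactivated" notice appears just above) opts out of the lock and talks on its own RPC socket, so both come up cleanly. Condensed to its essentials (a sketch with the paths used in this job; the real harness also waits on each RPC socket before proceeding):

  bin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  $bin -m 0x1 &                                                 # holds /var/tmp/spdk_cpu_lock_000
  $bin -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # same core, no lock, separate RPC socket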
00:05:58.880 [2024-06-10 10:31:23.015103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.880 [2024-06-10 10:31:23.149380] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.450 10:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:59.450 10:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:05:59.450 10:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 626136 00:05:59.450 10:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 626136 00:05:59.450 10:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:00.046 lslocks: write error 00:06:00.046 10:31:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 626136 00:06:00.046 10:31:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 626136 ']' 00:06:00.046 10:31:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 626136 00:06:00.046 10:31:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:00.046 10:31:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:00.046 10:31:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 626136 00:06:00.046 10:31:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:00.046 10:31:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:00.046 10:31:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 626136' 00:06:00.046 killing process with pid 626136 00:06:00.046 10:31:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 626136 00:06:00.046 10:31:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 626136 00:06:00.617 10:31:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 626455 00:06:00.617 10:31:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 626455 ']' 00:06:00.617 10:31:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 626455 00:06:00.617 10:31:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:00.617 10:31:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:00.617 10:31:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 626455 00:06:00.617 10:31:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:00.617 10:31:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:00.617 10:31:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 626455' 00:06:00.617 killing 
process with pid 626455 00:06:00.617 10:31:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 626455 00:06:00.617 10:31:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 626455 00:06:00.617 00:06:00.617 real 0m2.818s 00:06:00.617 user 0m3.075s 00:06:00.617 sys 0m0.803s 00:06:00.617 10:31:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:00.617 10:31:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.617 ************************************ 00:06:00.617 END TEST non_locking_app_on_locked_coremask 00:06:00.617 ************************************ 00:06:00.878 10:31:24 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:00.878 10:31:24 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:00.878 10:31:24 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:00.878 10:31:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.878 ************************************ 00:06:00.878 START TEST locking_app_on_unlocked_coremask 00:06:00.878 ************************************ 00:06:00.878 10:31:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_unlocked_coremask 00:06:00.878 10:31:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=626843 00:06:00.878 10:31:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 626843 /var/tmp/spdk.sock 00:06:00.878 10:31:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:00.878 10:31:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 626843 ']' 00:06:00.878 10:31:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.878 10:31:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:00.878 10:31:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.878 10:31:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:00.878 10:31:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.878 [2024-06-10 10:31:25.031650] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:06:00.878 [2024-06-10 10:31:25.031705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid626843 ] 00:06:00.878 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.878 [2024-06-10 10:31:25.094258] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
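locking_app_on_unlocked_coremask, which is starting up here, reverses that arrangement: the first target (pid 626843) is launched with --disable-cpumask-locks, and the second plain '-m 0x1' instance on /var/tmp/spdk2.sock (pid 626882, started further down) is the one expected to own the core-0 lock. A quick way to see who holds it, using the per-core lock files this suite enumerates later in the log:

  ls /var/tmp/spdk_cpu_lock_* 2>/dev/null   # one lock file per claimed core, e.g. spdk_cpu_lock_000
  lslocks | grep spdk_cpu_lock              # the PID column shows which instance holds each lock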
00:06:00.878 [2024-06-10 10:31:25.094290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.139 [2024-06-10 10:31:25.166266] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.712 10:31:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:01.712 10:31:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:01.712 10:31:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:01.712 10:31:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=626882 00:06:01.712 10:31:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 626882 /var/tmp/spdk2.sock 00:06:01.712 10:31:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 626882 ']' 00:06:01.712 10:31:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.712 10:31:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:01.712 10:31:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:01.712 10:31:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:01.712 10:31:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.712 [2024-06-10 10:31:25.814873] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
00:06:01.712 [2024-06-10 10:31:25.814922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid626882 ] 00:06:01.712 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.712 [2024-06-10 10:31:25.904049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.973 [2024-06-10 10:31:26.037822] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.544 10:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:02.544 10:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:02.544 10:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 626882 00:06:02.544 10:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 626882 00:06:02.544 10:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:03.115 lslocks: write error 00:06:03.115 10:31:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 626843 00:06:03.115 10:31:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 626843 ']' 00:06:03.115 10:31:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 626843 00:06:03.115 10:31:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:03.115 10:31:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:03.115 10:31:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 626843 00:06:03.115 10:31:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:03.115 10:31:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:03.115 10:31:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 626843' 00:06:03.115 killing process with pid 626843 00:06:03.115 10:31:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 626843 00:06:03.115 10:31:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 626843 00:06:03.686 10:31:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 626882 00:06:03.686 10:31:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 626882 ']' 00:06:03.686 10:31:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 626882 00:06:03.686 10:31:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:03.686 10:31:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:03.686 10:31:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 626882 00:06:03.686 10:31:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:03.686 
10:31:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:03.686 10:31:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 626882' 00:06:03.686 killing process with pid 626882 00:06:03.687 10:31:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 626882 00:06:03.687 10:31:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 626882 00:06:03.947 00:06:03.947 real 0m3.034s 00:06:03.947 user 0m3.266s 00:06:03.947 sys 0m0.897s 00:06:03.947 10:31:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:03.947 10:31:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.947 ************************************ 00:06:03.947 END TEST locking_app_on_unlocked_coremask 00:06:03.947 ************************************ 00:06:03.947 10:31:28 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:03.947 10:31:28 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:03.947 10:31:28 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:03.947 10:31:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.947 ************************************ 00:06:03.947 START TEST locking_app_on_locked_coremask 00:06:03.947 ************************************ 00:06:03.947 10:31:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_locked_coremask 00:06:03.947 10:31:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=627550 00:06:03.947 10:31:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 627550 /var/tmp/spdk.sock 00:06:03.947 10:31:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:03.947 10:31:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 627550 ']' 00:06:03.947 10:31:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.947 10:31:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:03.947 10:31:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.948 10:31:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:03.948 10:31:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.948 [2024-06-10 10:31:28.135662] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
00:06:03.948 [2024-06-10 10:31:28.135715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid627550 ] 00:06:03.948 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.948 [2024-06-10 10:31:28.198178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.208 [2024-06-10 10:31:28.273217] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.778 10:31:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:04.778 10:31:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:04.778 10:31:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=627568 00:06:04.778 10:31:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 627568 /var/tmp/spdk2.sock 00:06:04.778 10:31:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:06:04.778 10:31:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:04.778 10:31:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 627568 /var/tmp/spdk2.sock 00:06:04.779 10:31:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:06:04.779 10:31:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:04.779 10:31:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:06:04.779 10:31:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:04.779 10:31:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 627568 /var/tmp/spdk2.sock 00:06:04.779 10:31:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 627568 ']' 00:06:04.779 10:31:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:04.779 10:31:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:04.779 10:31:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:04.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:04.779 10:31:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:04.779 10:31:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.779 [2024-06-10 10:31:28.949070] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
00:06:04.779 [2024-06-10 10:31:28.949121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid627568 ] 00:06:04.779 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.779 [2024-06-10 10:31:29.038957] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 627550 has claimed it. 00:06:04.779 [2024-06-10 10:31:29.038997] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:05.350 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (627568) - No such process 00:06:05.350 ERROR: process (pid: 627568) is no longer running 00:06:05.350 10:31:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:05.350 10:31:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 1 00:06:05.350 10:31:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:06:05.350 10:31:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:05.350 10:31:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:05.350 10:31:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:05.350 10:31:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 627550 00:06:05.350 10:31:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 627550 00:06:05.350 10:31:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:05.921 lslocks: write error 00:06:05.921 10:31:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 627550 00:06:05.921 10:31:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 627550 ']' 00:06:05.921 10:31:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 627550 00:06:05.921 10:31:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:05.921 10:31:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:05.921 10:31:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 627550 00:06:05.921 10:31:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:05.921 10:31:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:05.921 10:31:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 627550' 00:06:05.921 killing process with pid 627550 00:06:05.921 10:31:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 627550 00:06:05.921 10:31:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 627550 00:06:06.182 00:06:06.182 real 0m2.150s 00:06:06.182 user 0m2.379s 00:06:06.182 sys 0m0.584s 00:06:06.182 10:31:30 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:06:06.182 10:31:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.182 ************************************ 00:06:06.182 END TEST locking_app_on_locked_coremask 00:06:06.182 ************************************ 00:06:06.182 10:31:30 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:06.182 10:31:30 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:06.182 10:31:30 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:06.182 10:31:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.182 ************************************ 00:06:06.182 START TEST locking_overlapped_coremask 00:06:06.182 ************************************ 00:06:06.182 10:31:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask 00:06:06.182 10:31:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=627932 00:06:06.182 10:31:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 627932 /var/tmp/spdk.sock 00:06:06.182 10:31:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:06.182 10:31:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 627932 ']' 00:06:06.182 10:31:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.182 10:31:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:06.182 10:31:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.182 10:31:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:06.182 10:31:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.182 [2024-06-10 10:31:30.369894] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
00:06:06.182 [2024-06-10 10:31:30.369963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid627932 ] 00:06:06.182 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.182 [2024-06-10 10:31:30.434521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:06.443 [2024-06-10 10:31:30.511276] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.443 [2024-06-10 10:31:30.511346] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.443 [2024-06-10 10:31:30.511349] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.015 10:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:07.015 10:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:07.015 10:31:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=628112 00:06:07.015 10:31:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 628112 /var/tmp/spdk2.sock 00:06:07.015 10:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:06:07.015 10:31:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:07.015 10:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 628112 /var/tmp/spdk2.sock 00:06:07.015 10:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:06:07.015 10:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:07.015 10:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:06:07.015 10:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:07.015 10:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 628112 /var/tmp/spdk2.sock 00:06:07.015 10:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 628112 ']' 00:06:07.015 10:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:07.015 10:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:07.015 10:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:07.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:07.015 10:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:07.015 10:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.015 [2024-06-10 10:31:31.189728] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
00:06:07.015 [2024-06-10 10:31:31.189781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid628112 ] 00:06:07.015 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.015 [2024-06-10 10:31:31.260384] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 627932 has claimed it. 00:06:07.015 [2024-06-10 10:31:31.260416] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:07.587 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (628112) - No such process 00:06:07.587 ERROR: process (pid: 628112) is no longer running 00:06:07.587 10:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:07.587 10:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 1 00:06:07.587 10:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:06:07.587 10:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:07.587 10:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:07.587 10:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:07.588 10:31:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:07.588 10:31:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:07.588 10:31:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:07.588 10:31:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:07.588 10:31:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 627932 00:06:07.588 10:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@949 -- # '[' -z 627932 ']' 00:06:07.588 10:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # kill -0 627932 00:06:07.588 10:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # uname 00:06:07.588 10:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:07.588 10:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 627932 00:06:07.588 10:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:07.588 10:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:07.588 10:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 627932' 00:06:07.588 killing process with pid 627932 00:06:07.588 10:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # kill 627932 
00:06:07.588 10:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # wait 627932 00:06:07.849 00:06:07.849 real 0m1.756s 00:06:07.849 user 0m4.933s 00:06:07.849 sys 0m0.370s 00:06:07.849 10:31:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:07.849 10:31:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.849 ************************************ 00:06:07.849 END TEST locking_overlapped_coremask 00:06:07.849 ************************************ 00:06:07.849 10:31:32 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:07.849 10:31:32 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:07.849 10:31:32 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:07.849 10:31:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.849 ************************************ 00:06:07.849 START TEST locking_overlapped_coremask_via_rpc 00:06:07.849 ************************************ 00:06:07.849 10:31:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask_via_rpc 00:06:07.849 10:31:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=628303 00:06:07.849 10:31:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 628303 /var/tmp/spdk.sock 00:06:07.849 10:31:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:07.849 10:31:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 628303 ']' 00:06:07.849 10:31:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.849 10:31:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:07.849 10:31:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.849 10:31:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:07.849 10:31:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.109 [2024-06-10 10:31:32.188197] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:06:08.109 [2024-06-10 10:31:32.188247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid628303 ] 00:06:08.109 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.109 [2024-06-10 10:31:32.265343] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:08.109 [2024-06-10 10:31:32.265378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:08.109 [2024-06-10 10:31:32.341044] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.109 [2024-06-10 10:31:32.341165] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.109 [2024-06-10 10:31:32.341168] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.049 10:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:09.049 10:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:09.049 10:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=628582 00:06:09.049 10:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 628582 /var/tmp/spdk2.sock 00:06:09.049 10:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 628582 ']' 00:06:09.049 10:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:09.049 10:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.049 10:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:09.049 10:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:09.049 10:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:09.049 10:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.049 [2024-06-10 10:31:33.066926] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:06:09.049 [2024-06-10 10:31:33.066980] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid628582 ] 00:06:09.049 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.049 [2024-06-10 10:31:33.139081] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
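Both targets in locking_overlapped_coremask_via_rpc start with --disable-cpumask-locks: a 0x7 instance (cores 0-2, pid 628303) and a 0x1c instance (cores 2-4) on /var/tmp/spdk2.sock, so they boot despite overlapping on core 2. The conflict only surfaces when the locks are enabled over RPC, which is what the trace below does: the first enable succeeds and the second is expected to fail with the "Failed to claim CPU core: 2" JSON-RPC error. By hand that would be roughly:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc framework_enable_cpumask_locks                           # 0x7 instance claims cores 0-2
  $rpc -s /var/tmp/spdk2.sock framework_enable_cpumask_locks    # expected to fail: core 2 already claimed by pid 628303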
00:06:09.049 [2024-06-10 10:31:33.139107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:09.049 [2024-06-10 10:31:33.248764] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:06:09.049 [2024-06-10 10:31:33.248921] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.049 [2024-06-10 10:31:33.248924] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4 00:06:09.619 10:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:09.619 10:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:09.619 10:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:09.619 10:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:09.619 10:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.619 10:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:09.619 10:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:09.619 10:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:06:09.619 10:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:09.619 10:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:06:09.619 10:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:09.619 10:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:06:09.619 10:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:09.619 10:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:09.619 10:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:09.619 10:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.619 [2024-06-10 10:31:33.844309] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 628303 has claimed it. 
00:06:09.619 request: 00:06:09.619 { 00:06:09.619 "method": "framework_enable_cpumask_locks", 00:06:09.619 "req_id": 1 00:06:09.619 } 00:06:09.619 Got JSON-RPC error response 00:06:09.619 response: 00:06:09.619 { 00:06:09.619 "code": -32603, 00:06:09.619 "message": "Failed to claim CPU core: 2" 00:06:09.619 } 00:06:09.619 10:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:06:09.619 10:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:06:09.619 10:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:09.619 10:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:09.619 10:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:09.619 10:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 628303 /var/tmp/spdk.sock 00:06:09.619 10:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 628303 ']' 00:06:09.619 10:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.619 10:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:09.619 10:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.619 10:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:09.619 10:31:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.879 10:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:09.879 10:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:09.879 10:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 628582 /var/tmp/spdk2.sock 00:06:09.879 10:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 628582 ']' 00:06:09.879 10:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.879 10:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:09.879 10:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:09.879 10:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:09.879 10:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.140 10:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:10.140 10:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:10.140 10:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:10.140 10:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:10.140 10:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:10.140 10:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:10.140 00:06:10.140 real 0m2.053s 00:06:10.140 user 0m0.824s 00:06:10.140 sys 0m0.159s 00:06:10.140 10:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:10.140 10:31:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.140 ************************************ 00:06:10.140 END TEST locking_overlapped_coremask_via_rpc 00:06:10.140 ************************************ 00:06:10.140 10:31:34 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:10.140 10:31:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 628303 ]] 00:06:10.140 10:31:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 628303 00:06:10.140 10:31:34 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 628303 ']' 00:06:10.140 10:31:34 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 628303 00:06:10.140 10:31:34 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:06:10.140 10:31:34 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:10.140 10:31:34 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 628303 00:06:10.140 10:31:34 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:10.140 10:31:34 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:10.140 10:31:34 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 628303' 00:06:10.140 killing process with pid 628303 00:06:10.140 10:31:34 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 628303 00:06:10.140 10:31:34 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 628303 00:06:10.401 10:31:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 628582 ]] 00:06:10.401 10:31:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 628582 00:06:10.401 10:31:34 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 628582 ']' 00:06:10.401 10:31:34 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 628582 00:06:10.401 10:31:34 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:06:10.401 10:31:34 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 
00:06:10.401 10:31:34 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 628582 00:06:10.401 10:31:34 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:06:10.401 10:31:34 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:06:10.401 10:31:34 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 628582' 00:06:10.401 killing process with pid 628582 00:06:10.401 10:31:34 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 628582 00:06:10.401 10:31:34 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 628582 00:06:10.661 10:31:34 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:10.661 10:31:34 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:10.661 10:31:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 628303 ]] 00:06:10.661 10:31:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 628303 00:06:10.661 10:31:34 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 628303 ']' 00:06:10.661 10:31:34 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 628303 00:06:10.661 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (628303) - No such process 00:06:10.661 10:31:34 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 628303 is not found' 00:06:10.661 Process with pid 628303 is not found 00:06:10.661 10:31:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 628582 ]] 00:06:10.661 10:31:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 628582 00:06:10.661 10:31:34 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 628582 ']' 00:06:10.661 10:31:34 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 628582 00:06:10.661 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (628582) - No such process 00:06:10.661 10:31:34 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 628582 is not found' 00:06:10.661 Process with pid 628582 is not found 00:06:10.661 10:31:34 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:10.661 00:06:10.661 real 0m16.329s 00:06:10.661 user 0m27.760s 00:06:10.661 sys 0m4.714s 00:06:10.661 10:31:34 event.cpu_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:10.661 10:31:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.661 ************************************ 00:06:10.661 END TEST cpu_locks 00:06:10.661 ************************************ 00:06:10.661 00:06:10.661 real 0m41.871s 00:06:10.661 user 1m21.424s 00:06:10.661 sys 0m7.763s 00:06:10.661 10:31:34 event -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:10.661 10:31:34 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.661 ************************************ 00:06:10.661 END TEST event 00:06:10.661 ************************************ 00:06:10.661 10:31:34 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:10.661 10:31:34 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:10.661 10:31:34 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:10.661 10:31:34 -- common/autotest_common.sh@10 -- # set +x 00:06:10.661 ************************************ 00:06:10.661 START TEST thread 00:06:10.661 ************************************ 00:06:10.661 10:31:34 thread -- common/autotest_common.sh@1124 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:10.921 * Looking for test storage... 00:06:10.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:10.921 10:31:34 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:10.921 10:31:34 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:06:10.921 10:31:34 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:10.921 10:31:34 thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.921 ************************************ 00:06:10.921 START TEST thread_poller_perf 00:06:10.921 ************************************ 00:06:10.921 10:31:34 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:10.922 [2024-06-10 10:31:35.019660] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:06:10.922 [2024-06-10 10:31:35.019761] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid629075 ] 00:06:10.922 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.922 [2024-06-10 10:31:35.090397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.922 [2024-06-10 10:31:35.164606] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.922 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:12.333 ====================================== 00:06:12.333 busy:2408181954 (cyc) 00:06:12.333 total_run_count: 287000 00:06:12.333 tsc_hz: 2400000000 (cyc) 00:06:12.333 ====================================== 00:06:12.333 poller_cost: 8390 (cyc), 3495 (nsec) 00:06:12.333 00:06:12.333 real 0m1.229s 00:06:12.333 user 0m1.143s 00:06:12.333 sys 0m0.081s 00:06:12.333 10:31:36 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:12.333 10:31:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:12.333 ************************************ 00:06:12.333 END TEST thread_poller_perf 00:06:12.333 ************************************ 00:06:12.333 10:31:36 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:12.333 10:31:36 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:06:12.333 10:31:36 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:12.333 10:31:36 thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.333 ************************************ 00:06:12.333 START TEST thread_poller_perf 00:06:12.333 ************************************ 00:06:12.333 10:31:36 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:12.333 [2024-06-10 10:31:36.326834] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
00:06:12.333 [2024-06-10 10:31:36.326921] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid629323 ] 00:06:12.333 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.333 [2024-06-10 10:31:36.393772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.333 [2024-06-10 10:31:36.463053] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.333 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:13.273 ====================================== 00:06:13.273 busy:2401856702 (cyc) 00:06:13.273 total_run_count: 3812000 00:06:13.273 tsc_hz: 2400000000 (cyc) 00:06:13.273 ====================================== 00:06:13.273 poller_cost: 630 (cyc), 262 (nsec) 00:06:13.273 00:06:13.273 real 0m1.212s 00:06:13.273 user 0m1.137s 00:06:13.273 sys 0m0.071s 00:06:13.273 10:31:37 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:13.273 10:31:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:13.273 ************************************ 00:06:13.273 END TEST thread_poller_perf 00:06:13.273 ************************************ 00:06:13.273 10:31:37 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:13.273 00:06:13.273 real 0m2.688s 00:06:13.273 user 0m2.377s 00:06:13.273 sys 0m0.318s 00:06:13.273 10:31:37 thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:13.273 10:31:37 thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.273 ************************************ 00:06:13.273 END TEST thread 00:06:13.273 ************************************ 00:06:13.535 10:31:37 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:13.535 10:31:37 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:13.535 10:31:37 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:13.535 10:31:37 -- common/autotest_common.sh@10 -- # set +x 00:06:13.535 ************************************ 00:06:13.535 START TEST accel 00:06:13.535 ************************************ 00:06:13.535 10:31:37 accel -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:13.535 * Looking for test storage... 00:06:13.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:13.535 10:31:37 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:13.535 10:31:37 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:13.535 10:31:37 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:13.535 10:31:37 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=629588 00:06:13.535 10:31:37 accel -- accel/accel.sh@63 -- # waitforlisten 629588 00:06:13.535 10:31:37 accel -- common/autotest_common.sh@830 -- # '[' -z 629588 ']' 00:06:13.535 10:31:37 accel -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.535 10:31:37 accel -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:13.535 10:31:37 accel -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:13.535 10:31:37 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:13.535 10:31:37 accel -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:13.535 10:31:37 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:13.535 10:31:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.535 10:31:37 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.535 10:31:37 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.535 10:31:37 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.535 10:31:37 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.535 10:31:37 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.535 10:31:37 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:13.535 10:31:37 accel -- accel/accel.sh@41 -- # jq -r . 00:06:13.535 [2024-06-10 10:31:37.785056] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:06:13.536 [2024-06-10 10:31:37.785137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid629588 ] 00:06:13.536 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.796 [2024-06-10 10:31:37.849998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.796 [2024-06-10 10:31:37.925934] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.368 10:31:38 accel -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:14.368 10:31:38 accel -- common/autotest_common.sh@863 -- # return 0 00:06:14.368 10:31:38 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:14.368 10:31:38 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:14.368 10:31:38 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:14.368 10:31:38 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:14.368 10:31:38 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:14.368 10:31:38 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:14.368 10:31:38 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:14.368 10:31:38 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:14.368 10:31:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.368 10:31:38 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:14.368 10:31:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.368 10:31:38 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.368 10:31:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.368 10:31:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.368 10:31:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.368 10:31:38 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.368 10:31:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.368 10:31:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.368 10:31:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.368 10:31:38 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.368 10:31:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.368 10:31:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.368 10:31:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.368 10:31:38 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.368 10:31:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.368 10:31:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.368 10:31:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.368 10:31:38 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.368 10:31:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.368 10:31:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.368 10:31:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.368 10:31:38 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.368 10:31:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.368 10:31:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.368 10:31:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.368 10:31:38 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.368 10:31:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.368 10:31:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.368 10:31:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.368 10:31:38 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.368 10:31:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.368 10:31:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.368 10:31:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.368 10:31:38 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.368 10:31:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.368 10:31:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.368 10:31:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.368 10:31:38 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.368 10:31:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.368 10:31:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.368 10:31:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.368 10:31:38 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.368 10:31:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.368 
10:31:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.368 10:31:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.368 10:31:38 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.368 10:31:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.368 10:31:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.368 10:31:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.368 10:31:38 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.368 10:31:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.368 10:31:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.368 10:31:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.368 10:31:38 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.368 10:31:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.368 10:31:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.368 10:31:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.368 10:31:38 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.368 10:31:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.368 10:31:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.368 10:31:38 accel -- accel/accel.sh@75 -- # killprocess 629588 00:06:14.368 10:31:38 accel -- common/autotest_common.sh@949 -- # '[' -z 629588 ']' 00:06:14.368 10:31:38 accel -- common/autotest_common.sh@953 -- # kill -0 629588 00:06:14.368 10:31:38 accel -- common/autotest_common.sh@954 -- # uname 00:06:14.368 10:31:38 accel -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:14.368 10:31:38 accel -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 629588 00:06:14.368 10:31:38 accel -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:14.368 10:31:38 accel -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:14.368 10:31:38 accel -- common/autotest_common.sh@967 -- # echo 'killing process with pid 629588' 00:06:14.368 killing process with pid 629588 00:06:14.368 10:31:38 accel -- common/autotest_common.sh@968 -- # kill 629588 00:06:14.368 10:31:38 accel -- common/autotest_common.sh@973 -- # wait 629588 00:06:14.629 10:31:38 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:14.629 10:31:38 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:14.629 10:31:38 accel -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:14.629 10:31:38 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:14.629 10:31:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.629 10:31:38 accel.accel_help -- common/autotest_common.sh@1124 -- # accel_perf -h 00:06:14.629 10:31:38 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:14.629 10:31:38 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.629 10:31:38 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.629 10:31:38 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:14.629 10:31:38 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.629 10:31:38 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.629 10:31:38 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.629 10:31:38 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:14.629 10:31:38 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:14.889 10:31:38 accel.accel_help -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:14.889 10:31:38 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:14.889 10:31:38 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:14.889 10:31:38 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:14.889 10:31:38 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:14.889 10:31:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.889 ************************************ 00:06:14.889 START TEST accel_missing_filename 00:06:14.889 ************************************ 00:06:14.889 10:31:38 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress 00:06:14.889 10:31:38 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:06:14.889 10:31:38 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:14.889 10:31:38 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:14.889 10:31:38 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:14.889 10:31:38 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:14.889 10:31:38 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:14.889 10:31:39 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:06:14.889 10:31:39 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:14.889 10:31:39 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:14.889 10:31:39 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.889 10:31:39 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.890 10:31:39 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.890 10:31:39 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.890 10:31:39 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.890 10:31:39 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:14.890 10:31:39 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:14.890 [2024-06-10 10:31:39.027983] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:06:14.890 [2024-06-10 10:31:39.028084] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid629871 ] 00:06:14.890 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.890 [2024-06-10 10:31:39.098849] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.890 [2024-06-10 10:31:39.165178] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.150 [2024-06-10 10:31:39.197138] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:15.150 [2024-06-10 10:31:39.234450] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:15.150 A filename is required. 
00:06:15.150 10:31:39 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:06:15.150 10:31:39 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:15.150 10:31:39 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:06:15.150 10:31:39 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:06:15.150 10:31:39 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:06:15.150 10:31:39 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:15.150 00:06:15.150 real 0m0.290s 00:06:15.150 user 0m0.218s 00:06:15.150 sys 0m0.113s 00:06:15.150 10:31:39 accel.accel_missing_filename -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:15.150 10:31:39 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:15.150 ************************************ 00:06:15.150 END TEST accel_missing_filename 00:06:15.150 ************************************ 00:06:15.150 10:31:39 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:15.150 10:31:39 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:06:15.150 10:31:39 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:15.150 10:31:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.150 ************************************ 00:06:15.150 START TEST accel_compress_verify 00:06:15.150 ************************************ 00:06:15.150 10:31:39 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:15.150 10:31:39 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:06:15.150 10:31:39 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:15.150 10:31:39 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:15.150 10:31:39 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:15.150 10:31:39 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:15.150 10:31:39 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:15.150 10:31:39 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:15.150 10:31:39 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:15.150 10:31:39 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:15.150 10:31:39 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.150 10:31:39 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.150 10:31:39 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.150 10:31:39 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.150 10:31:39 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.150 
10:31:39 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:15.150 10:31:39 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:15.150 [2024-06-10 10:31:39.392566] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:06:15.150 [2024-06-10 10:31:39.392654] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid629936 ] 00:06:15.150 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.411 [2024-06-10 10:31:39.457317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.411 [2024-06-10 10:31:39.525198] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.411 [2024-06-10 10:31:39.557232] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:15.411 [2024-06-10 10:31:39.594528] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:15.411 00:06:15.411 Compression does not support the verify option, aborting. 00:06:15.411 10:31:39 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:06:15.411 10:31:39 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:15.411 10:31:39 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:06:15.411 10:31:39 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:06:15.411 10:31:39 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:06:15.411 10:31:39 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:15.411 00:06:15.411 real 0m0.287s 00:06:15.411 user 0m0.217s 00:06:15.411 sys 0m0.112s 00:06:15.411 10:31:39 accel.accel_compress_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:15.411 10:31:39 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:15.411 ************************************ 00:06:15.411 END TEST accel_compress_verify 00:06:15.411 ************************************ 00:06:15.411 10:31:39 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:15.411 10:31:39 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:15.411 10:31:39 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:15.411 10:31:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.672 ************************************ 00:06:15.672 START TEST accel_wrong_workload 00:06:15.672 ************************************ 00:06:15.672 10:31:39 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w foobar 00:06:15.672 10:31:39 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:06:15.672 10:31:39 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:15.672 10:31:39 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:15.672 10:31:39 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:15.672 10:31:39 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:15.672 10:31:39 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:15.673 10:31:39 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 00:06:15.673 
10:31:39 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:15.673 10:31:39 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:15.673 10:31:39 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.673 10:31:39 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.673 10:31:39 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.673 10:31:39 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.673 10:31:39 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.673 10:31:39 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:15.673 10:31:39 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:15.673 Unsupported workload type: foobar 00:06:15.673 [2024-06-10 10:31:39.754271] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:15.673 accel_perf options: 00:06:15.673 [-h help message] 00:06:15.673 [-q queue depth per core] 00:06:15.673 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:15.673 [-T number of threads per core 00:06:15.673 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:15.673 [-t time in seconds] 00:06:15.673 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:15.673 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:15.673 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:15.673 [-l for compress/decompress workloads, name of uncompressed input file 00:06:15.673 [-S for crc32c workload, use this seed value (default 0) 00:06:15.673 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:15.673 [-f for fill workload, use this BYTE value (default 255) 00:06:15.673 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:15.673 [-y verify result if this switch is on] 00:06:15.673 [-a tasks to allocate per core (default: same value as -q)] 00:06:15.673 Can be used to spread operations across a wider range of memory. 
00:06:15.673 10:31:39 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:06:15.673 10:31:39 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:15.673 10:31:39 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:15.673 10:31:39 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:15.673 00:06:15.673 real 0m0.037s 00:06:15.673 user 0m0.022s 00:06:15.673 sys 0m0.014s 00:06:15.673 10:31:39 accel.accel_wrong_workload -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:15.673 10:31:39 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:15.673 ************************************ 00:06:15.673 END TEST accel_wrong_workload 00:06:15.673 ************************************ 00:06:15.673 Error: writing output failed: Broken pipe 00:06:15.673 10:31:39 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:15.673 10:31:39 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:06:15.673 10:31:39 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:15.673 10:31:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.673 ************************************ 00:06:15.673 START TEST accel_negative_buffers 00:06:15.673 ************************************ 00:06:15.673 10:31:39 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:15.673 10:31:39 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:06:15.673 10:31:39 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:15.673 10:31:39 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:15.673 10:31:39 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:15.673 10:31:39 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:15.673 10:31:39 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:15.673 10:31:39 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:06:15.673 10:31:39 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:15.673 10:31:39 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:15.673 10:31:39 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.673 10:31:39 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.673 10:31:39 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.673 10:31:39 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.673 10:31:39 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.673 10:31:39 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:15.673 10:31:39 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:15.673 -x option must be non-negative. 
00:06:15.673 [2024-06-10 10:31:39.865708] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:15.673 accel_perf options: 00:06:15.673 [-h help message] 00:06:15.673 [-q queue depth per core] 00:06:15.673 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:15.673 [-T number of threads per core 00:06:15.673 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:15.673 [-t time in seconds] 00:06:15.673 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:15.673 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:15.673 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:15.673 [-l for compress/decompress workloads, name of uncompressed input file 00:06:15.673 [-S for crc32c workload, use this seed value (default 0) 00:06:15.673 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:15.673 [-f for fill workload, use this BYTE value (default 255) 00:06:15.673 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:15.673 [-y verify result if this switch is on] 00:06:15.673 [-a tasks to allocate per core (default: same value as -q)] 00:06:15.673 Can be used to spread operations across a wider range of memory. 00:06:15.673 10:31:39 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:06:15.673 10:31:39 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:15.673 10:31:39 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:15.673 10:31:39 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:15.673 00:06:15.673 real 0m0.038s 00:06:15.673 user 0m0.022s 00:06:15.673 sys 0m0.016s 00:06:15.673 10:31:39 accel.accel_negative_buffers -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:15.673 10:31:39 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:15.673 ************************************ 00:06:15.673 END TEST accel_negative_buffers 00:06:15.673 ************************************ 00:06:15.673 Error: writing output failed: Broken pipe 00:06:15.673 10:31:39 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:15.673 10:31:39 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:15.673 10:31:39 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:15.673 10:31:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.673 ************************************ 00:06:15.673 START TEST accel_crc32c 00:06:15.673 ************************************ 00:06:15.673 10:31:39 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:15.673 10:31:39 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:15.673 10:31:39 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:15.673 10:31:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.673 10:31:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.673 10:31:39 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:15.673 10:31:39 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:06:15.673 10:31:39 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:15.673 10:31:39 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.673 10:31:39 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.673 10:31:39 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.673 10:31:39 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.673 10:31:39 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.673 10:31:39 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:15.673 10:31:39 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:15.935 [2024-06-10 10:31:39.979116] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:06:15.935 [2024-06-10 10:31:39.979201] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid630279 ] 00:06:15.935 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.935 [2024-06-10 10:31:40.047228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.935 [2024-06-10 10:31:40.123579] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.935 10:31:40 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:15.935 10:31:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.321 10:31:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.321 10:31:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.321 10:31:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.321 10:31:41 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.321 10:31:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.321 10:31:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.321 10:31:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.321 10:31:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.321 10:31:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.321 10:31:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.321 10:31:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.321 10:31:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.321 10:31:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.321 10:31:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.321 10:31:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.321 10:31:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.321 10:31:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.321 10:31:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.321 10:31:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.321 10:31:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.321 10:31:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.321 10:31:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.321 10:31:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.321 10:31:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.321 10:31:41 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:17.321 10:31:41 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:17.321 10:31:41 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.321 00:06:17.321 real 0m1.304s 00:06:17.321 user 0m1.205s 00:06:17.321 sys 0m0.111s 00:06:17.321 10:31:41 accel.accel_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:17.321 10:31:41 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:17.321 ************************************ 00:06:17.321 END TEST accel_crc32c 00:06:17.321 ************************************ 00:06:17.321 10:31:41 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:17.321 10:31:41 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:17.321 10:31:41 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:17.321 10:31:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.321 ************************************ 00:06:17.321 START TEST accel_crc32c_C2 00:06:17.321 ************************************ 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:17.321 10:31:41 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:17.321 [2024-06-10 10:31:41.352893] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:06:17.321 [2024-06-10 10:31:41.352955] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid630480 ] 00:06:17.321 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.321 [2024-06-10 10:31:41.415226] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.321 [2024-06-10 10:31:41.481731] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.321 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.322 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.322 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:17.322 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.322 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:17.322 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.322 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.322 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:17.322 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.322 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.322 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.322 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:17.322 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.322 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.322 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.322 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:17.322 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.322 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.322 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.322 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:17.322 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.322 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.322 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.322 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:17.322 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.322 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.322 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.322 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.322 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.322 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.322 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:17.322 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:17.322 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:17.322 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:17.322 10:31:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.706 10:31:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.706 
10:31:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.706 10:31:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.706 10:31:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.706 10:31:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.706 10:31:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.706 10:31:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.706 10:31:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.706 10:31:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.706 10:31:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.706 10:31:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.706 10:31:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.706 10:31:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.706 10:31:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.706 10:31:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.706 10:31:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.706 10:31:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.706 10:31:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.706 10:31:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.706 10:31:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.706 10:31:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.706 10:31:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.706 10:31:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.706 10:31:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.706 10:31:42 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:18.706 10:31:42 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:18.706 10:31:42 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.706 00:06:18.706 real 0m1.288s 00:06:18.706 user 0m1.202s 00:06:18.706 sys 0m0.098s 00:06:18.706 10:31:42 accel.accel_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:18.706 10:31:42 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:18.706 ************************************ 00:06:18.706 END TEST accel_crc32c_C2 00:06:18.706 ************************************ 00:06:18.706 10:31:42 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:18.706 10:31:42 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:18.706 10:31:42 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:18.706 10:31:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.706 ************************************ 00:06:18.706 START TEST accel_copy 00:06:18.706 ************************************ 00:06:18.706 10:31:42 accel.accel_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy -y 00:06:18.706 10:31:42 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:18.706 10:31:42 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:18.706 10:31:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.706 10:31:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.706 10:31:42 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:18.706 10:31:42 
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:18.706 10:31:42 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:18.706 10:31:42 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.706 10:31:42 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.706 10:31:42 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.706 10:31:42 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.706 10:31:42 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:18.707 [2024-06-10 10:31:42.716534] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:06:18.707 [2024-06-10 10:31:42.716623] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid630689 ] 00:06:18.707 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.707 [2024-06-10 10:31:42.779129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.707 [2024-06-10 10:31:42.846513] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.707 10:31:42 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.707 10:31:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.090 10:31:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:20.090 10:31:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.090 10:31:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.090 10:31:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.090 10:31:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:20.090 10:31:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.090 10:31:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.090 10:31:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
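(Editor's sketch.) The two CRC-32C cases above, accel_crc32c and its chained accel_crc32c_C2 variant, both reduce to a one-second accel_perf run. A minimal manual reproduction might look like the lines below; the binary path and the -t/-w/-y/-C flags are copied verbatim from the trace, while dropping the harness's -c /dev/fd/62 JSON config is an assumption that the default software module is acceptable.

# Sketch only; not part of the harness. Path and flags as seen in the trace above.
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
"$PERF" -t 1 -w crc32c -y        # plain CRC-32C case (accel_crc32c)
"$PERF" -t 1 -w crc32c -y -C 2   # chained variant exercised by accel_crc32c_C2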
00:06:20.090 10:31:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:20.090 10:31:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.090 10:31:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.090 10:31:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.090 10:31:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:20.090 10:31:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.090 10:31:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.090 10:31:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.090 10:31:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:20.090 10:31:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.090 10:31:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.090 10:31:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.090 10:31:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:20.090 10:31:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.090 10:31:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.090 10:31:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.090 10:31:43 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:20.090 10:31:43 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:20.090 10:31:43 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.090 00:06:20.090 real 0m1.288s 00:06:20.090 user 0m1.197s 00:06:20.090 sys 0m0.102s 00:06:20.090 10:31:43 accel.accel_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:20.090 10:31:43 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:20.090 ************************************ 00:06:20.090 END TEST accel_copy 00:06:20.090 ************************************ 00:06:20.090 10:31:44 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:20.090 10:31:44 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:06:20.090 10:31:44 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:20.090 10:31:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:20.090 ************************************ 00:06:20.090 START TEST accel_fill 00:06:20.090 ************************************ 00:06:20.090 10:31:44 accel.accel_fill -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:20.090 10:31:44 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:20.090 10:31:44 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:20.090 10:31:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.090 10:31:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.090 10:31:44 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.091 10:31:44 accel.accel_fill -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:20.091 [2024-06-10 10:31:44.075704] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:06:20.091 [2024-06-10 10:31:44.075783] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid631018 ] 00:06:20.091 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.091 [2024-06-10 10:31:44.138482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.091 [2024-06-10 10:31:44.204067] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.091 10:31:44 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:20.091 10:31:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:21.476 10:31:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:21.476 10:31:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:21.476 10:31:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:21.476 10:31:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:21.476 10:31:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:21.476 10:31:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:21.476 10:31:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:21.476 10:31:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:21.476 10:31:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:21.476 10:31:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:21.476 10:31:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:21.476 10:31:45 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:06:21.476 10:31:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:21.476 10:31:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:21.476 10:31:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:21.476 10:31:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:21.476 10:31:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:21.476 10:31:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:21.476 10:31:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:21.476 10:31:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:21.476 10:31:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:21.476 10:31:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:21.476 10:31:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:21.476 10:31:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:21.476 10:31:45 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.476 10:31:45 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:21.476 10:31:45 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.476 00:06:21.476 real 0m1.285s 00:06:21.476 user 0m1.194s 00:06:21.476 sys 0m0.102s 00:06:21.477 10:31:45 accel.accel_fill -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:21.477 10:31:45 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:21.477 ************************************ 00:06:21.477 END TEST accel_fill 00:06:21.477 ************************************ 00:06:21.477 10:31:45 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:21.477 10:31:45 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:21.477 10:31:45 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:21.477 10:31:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.477 ************************************ 00:06:21.477 START TEST accel_copy_crc32c 00:06:21.477 ************************************ 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
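(Editor's sketch.) The accel_fill case that just finished differs from the neighbouring tests only in the extra options accel_test forwarded (-f 128 -q 64 -a 64, visible in the accel_perf command line above). A hedged re-run, again assuming this job's workspace layout and omitting the -c /dev/fd/62 config the harness pipes in, could be:

# Illustrative re-run of the accel_fill case; flags copied verbatim from the trace.
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
"$PERF" -t 1 -w fill -f 128 -q 64 -a 64 -y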
00:06:21.477 [2024-06-10 10:31:45.419424] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:06:21.477 [2024-06-10 10:31:45.419461] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid631365 ] 00:06:21.477 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.477 [2024-06-10 10:31:45.471350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.477 [2024-06-10 10:31:45.535574] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.477 10:31:45 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.477 10:31:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.419 10:31:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.419 10:31:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.419 10:31:46 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:06:22.419 10:31:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.419 10:31:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.419 10:31:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.419 10:31:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.419 10:31:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.419 10:31:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.419 10:31:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.419 10:31:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.419 10:31:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.419 10:31:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.419 10:31:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.419 10:31:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.419 10:31:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.419 10:31:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.419 10:31:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.419 10:31:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.419 10:31:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.419 10:31:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:22.419 10:31:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:22.419 10:31:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:22.419 10:31:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:22.419 10:31:46 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:22.419 10:31:46 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:22.419 10:31:46 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.419 00:06:22.419 real 0m1.256s 00:06:22.419 user 0m1.183s 00:06:22.419 sys 0m0.085s 00:06:22.419 10:31:46 accel.accel_copy_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:22.419 10:31:46 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:22.419 ************************************ 00:06:22.419 END TEST accel_copy_crc32c 00:06:22.419 ************************************ 00:06:22.419 10:31:46 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:22.419 10:31:46 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:22.419 10:31:46 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:22.419 10:31:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:22.680 ************************************ 00:06:22.680 START TEST accel_copy_crc32c_C2 00:06:22.680 ************************************ 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
copy_crc32c -y -C 2 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:22.680 [2024-06-10 10:31:46.769047] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:06:22.680 [2024-06-10 10:31:46.769136] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid631720 ] 00:06:22.680 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.680 [2024-06-10 10:31:46.831616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.680 [2024-06-10 10:31:46.898433] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.680 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:22.681 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.681 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.681 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.681 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:22.681 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.681 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.681 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.681 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.681 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.681 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.681 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.681 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:22.681 10:31:46 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.681 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.681 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.681 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.681 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.681 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.681 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.681 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.681 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.681 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.681 10:31:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.121 10:31:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.121 10:31:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.121 10:31:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.121 10:31:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.121 10:31:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.121 10:31:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.121 10:31:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.121 10:31:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.121 10:31:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.121 10:31:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.121 10:31:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.121 10:31:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.121 10:31:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.121 10:31:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.121 10:31:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.121 10:31:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.121 10:31:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.121 10:31:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.121 10:31:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.121 10:31:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.121 10:31:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.121 10:31:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.121 10:31:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.121 10:31:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.121 10:31:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.121 10:31:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:24.121 10:31:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.121 00:06:24.121 real 0m1.288s 00:06:24.121 user 0m1.199s 00:06:24.121 sys 0m0.102s 00:06:24.121 10:31:48 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:24.121 10:31:48 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- 
# set +x 00:06:24.121 ************************************ 00:06:24.121 END TEST accel_copy_crc32c_C2 00:06:24.121 ************************************ 00:06:24.121 10:31:48 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:24.121 10:31:48 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:24.121 10:31:48 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:24.121 10:31:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.121 ************************************ 00:06:24.121 START TEST accel_dualcast 00:06:24.121 ************************************ 00:06:24.121 10:31:48 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dualcast -y 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:24.121 [2024-06-10 10:31:48.131145] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
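(Editor's sketch.) The cases in this stretch of the log, copy and copy_crc32c with its -C 2 variant so far, and dualcast and compare queued next, all follow the same run_test pattern: build the accel config, launch accel_perf for one second, then check that the software module reported the expected opcode. The loop below could drive the workloads that take nothing beyond -y back to back; it is a sketch under the same path assumption as above, not the harness's own code.

# Hypothetical batch driver for the -y-only workloads seen in this section.
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
for w in copy copy_crc32c dualcast compare; do
    "$PERF" -t 1 -w "$w" -y || echo "workload $w failed" >&2
done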
00:06:24.121 [2024-06-10 10:31:48.131209] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid631924 ] 00:06:24.121 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.121 [2024-06-10 10:31:48.194489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.121 [2024-06-10 10:31:48.264972] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.121 10:31:48 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:24.122 10:31:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.122 
10:31:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.122 10:31:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:24.122 10:31:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.122 10:31:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.122 10:31:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.122 10:31:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:24.122 10:31:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.122 10:31:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.122 10:31:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.122 10:31:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:24.122 10:31:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.122 10:31:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.122 10:31:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.122 10:31:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:24.122 10:31:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.122 10:31:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.122 10:31:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.122 10:31:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:24.122 10:31:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.122 10:31:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.122 10:31:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.122 10:31:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.122 10:31:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.122 10:31:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.122 10:31:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:24.122 10:31:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:24.122 10:31:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:24.122 10:31:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:24.122 10:31:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:25.507 10:31:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:25.507 10:31:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:25.507 10:31:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:25.507 10:31:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:25.507 10:31:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:25.507 10:31:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:25.507 10:31:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:25.507 10:31:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:25.507 10:31:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:25.507 10:31:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:25.507 10:31:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:25.507 10:31:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:25.507 10:31:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:25.507 10:31:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:25.507 10:31:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:25.507 10:31:49 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:06:25.507 10:31:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:25.507 10:31:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:25.507 10:31:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:25.507 10:31:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:25.507 10:31:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:25.507 10:31:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:25.507 10:31:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:25.507 10:31:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:25.507 10:31:49 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:25.507 10:31:49 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:25.507 10:31:49 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.507 00:06:25.507 real 0m1.290s 00:06:25.507 user 0m1.200s 00:06:25.507 sys 0m0.102s 00:06:25.507 10:31:49 accel.accel_dualcast -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:25.507 10:31:49 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:25.507 ************************************ 00:06:25.507 END TEST accel_dualcast 00:06:25.507 ************************************ 00:06:25.507 10:31:49 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:25.507 10:31:49 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:25.507 10:31:49 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:25.507 10:31:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.507 ************************************ 00:06:25.507 START TEST accel_compare 00:06:25.507 ************************************ 00:06:25.507 10:31:49 accel.accel_compare -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compare -y 00:06:25.507 10:31:49 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:25.507 10:31:49 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:25.507 10:31:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.507 10:31:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.507 10:31:49 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:25.507 10:31:49 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:25.507 10:31:49 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:25.507 10:31:49 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.507 10:31:49 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.507 10:31:49 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.507 10:31:49 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.507 10:31:49 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.507 10:31:49 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:25.507 10:31:49 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:25.507 [2024-06-10 10:31:49.498212] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
00:06:25.507 [2024-06-10 10:31:49.498280] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid632121 ] 00:06:25.507 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.507 [2024-06-10 10:31:49.560550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.507 [2024-06-10 10:31:49.627576] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.507 10:31:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:25.507 10:31:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.507 10:31:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.507 10:31:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.507 10:31:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:25.507 10:31:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.507 10:31:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.507 10:31:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.507 10:31:49 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:25.507 10:31:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.507 10:31:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.507 10:31:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.507 10:31:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:25.507 10:31:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.507 10:31:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.507 10:31:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.507 10:31:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:25.507 10:31:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.507 10:31:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.507 10:31:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.507 10:31:49 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.508 10:31:49 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:25.508 10:31:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.893 10:31:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:26.893 10:31:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.893 10:31:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.893 10:31:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.893 10:31:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:26.893 10:31:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.893 10:31:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.893 10:31:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.893 10:31:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:26.893 10:31:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.893 10:31:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.893 10:31:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.893 10:31:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:26.893 10:31:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.893 10:31:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.893 10:31:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.893 10:31:50 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:06:26.893 10:31:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.893 10:31:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.893 10:31:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.893 10:31:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:26.893 10:31:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:26.893 10:31:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:26.893 10:31:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:26.893 10:31:50 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:26.893 10:31:50 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:26.893 10:31:50 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.893 00:06:26.893 real 0m1.288s 00:06:26.893 user 0m1.200s 00:06:26.893 sys 0m0.099s 00:06:26.893 10:31:50 accel.accel_compare -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:26.893 10:31:50 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:26.893 ************************************ 00:06:26.893 END TEST accel_compare 00:06:26.893 ************************************ 00:06:26.893 10:31:50 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:26.893 10:31:50 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:26.893 10:31:50 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:26.893 10:31:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.893 ************************************ 00:06:26.893 START TEST accel_xor 00:06:26.893 ************************************ 00:06:26.893 10:31:50 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y 00:06:26.893 10:31:50 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:26.893 10:31:50 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:26.893 10:31:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.893 10:31:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.893 10:31:50 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:26.893 10:31:50 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:26.893 10:31:50 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.893 10:31:50 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.893 10:31:50 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.893 10:31:50 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:26.893 10:31:50 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.893 10:31:50 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.893 10:31:50 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:26.893 10:31:50 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:26.893 [2024-06-10 10:31:50.858680] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
00:06:26.893 [2024-06-10 10:31:50.858742] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid632459 ] 00:06:26.893 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.893 [2024-06-10 10:31:50.920421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.893 [2024-06-10 10:31:50.984411] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.893 10:31:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:26.893 10:31:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.893 10:31:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.893 10:31:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.894 10:31:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.837 10:31:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.837 10:31:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.837 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.837 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.837 10:31:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.837 10:31:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.837 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.837 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.837 10:31:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.837 10:31:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.837 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.837 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.837 10:31:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.837 10:31:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.837 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.837 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.837 10:31:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.837 
10:31:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.837 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.837 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.837 10:31:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.837 10:31:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.837 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.837 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.837 10:31:52 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.837 10:31:52 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:27.837 10:31:52 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.837 00:06:27.837 real 0m1.282s 00:06:27.837 user 0m1.192s 00:06:27.837 sys 0m0.101s 00:06:27.837 10:31:52 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:27.837 10:31:52 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:27.837 ************************************ 00:06:27.837 END TEST accel_xor 00:06:27.837 ************************************ 00:06:28.099 10:31:52 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:28.099 10:31:52 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:28.099 10:31:52 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:28.099 10:31:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.099 ************************************ 00:06:28.099 START TEST accel_xor 00:06:28.099 ************************************ 00:06:28.099 10:31:52 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y -x 3 00:06:28.099 10:31:52 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:28.099 10:31:52 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:28.099 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.099 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.099 10:31:52 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:28.099 10:31:52 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:28.099 10:31:52 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:28.099 10:31:52 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.099 10:31:52 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.099 10:31:52 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.099 10:31:52 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.099 10:31:52 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.099 10:31:52 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:28.099 10:31:52 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:28.099 [2024-06-10 10:31:52.216502] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
00:06:28.099 [2024-06-10 10:31:52.216596] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid632808 ] 00:06:28.099 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.099 [2024-06-10 10:31:52.280448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.099 [2024-06-10 10:31:52.350968] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.099 10:31:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.099 10:31:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.099 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.099 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.099 10:31:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.099 10:31:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.099 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.099 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.099 10:31:52 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:28.099 10:31:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.099 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.099 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.099 10:31:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.099 10:31:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.099 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.359 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.359 10:31:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.359 10:31:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.359 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.359 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.359 10:31:52 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:28.359 10:31:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.359 10:31:52 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:28.359 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.359 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.359 10:31:52 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:28.359 10:31:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.359 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.359 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.359 10:31:52 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.359 10:31:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.360 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.360 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.360 10:31:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.360 10:31:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.360 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.360 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.360 10:31:52 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:28.360 10:31:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.360 10:31:52 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:28.360 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.360 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.360 10:31:52 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:28.360 10:31:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.360 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.360 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.360 10:31:52 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:28.360 10:31:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.360 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.360 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.360 10:31:52 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:28.360 10:31:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.360 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.360 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.360 10:31:52 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.360 10:31:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.360 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.360 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.360 10:31:52 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:28.360 10:31:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.360 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.360 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.360 10:31:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.360 10:31:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.360 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.360 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:28.360 10:31:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:28.360 10:31:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:28.360 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:28.360 10:31:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.302 10:31:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.302 10:31:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.302 10:31:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.302 10:31:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.302 10:31:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.302 10:31:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.302 10:31:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.302 10:31:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.302 10:31:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.302 10:31:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.302 10:31:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.302 10:31:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.303 10:31:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.303 10:31:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.303 10:31:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.303 10:31:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.303 10:31:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.303 
10:31:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.303 10:31:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.303 10:31:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.303 10:31:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:29.303 10:31:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:29.303 10:31:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:29.303 10:31:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:29.303 10:31:53 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.303 10:31:53 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:29.303 10:31:53 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.303 00:06:29.303 real 0m1.294s 00:06:29.303 user 0m1.192s 00:06:29.303 sys 0m0.112s 00:06:29.303 10:31:53 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:29.303 10:31:53 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:29.303 ************************************ 00:06:29.303 END TEST accel_xor 00:06:29.303 ************************************ 00:06:29.303 10:31:53 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:29.303 10:31:53 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:06:29.303 10:31:53 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:29.303 10:31:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.303 ************************************ 00:06:29.303 START TEST accel_dif_verify 00:06:29.303 ************************************ 00:06:29.303 10:31:53 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_verify 00:06:29.303 10:31:53 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:29.303 10:31:53 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:29.303 10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.303 10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.303 10:31:53 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:29.303 10:31:53 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:29.303 10:31:53 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:29.303 10:31:53 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.303 10:31:53 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.303 10:31:53 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.303 10:31:53 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.303 10:31:53 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.303 10:31:53 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:29.303 10:31:53 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:29.303 [2024-06-10 10:31:53.582047] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
00:06:29.303 [2024-06-10 10:31:53.582125] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid633155 ] 00:06:29.565 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.565 [2024-06-10 10:31:53.644689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.565 [2024-06-10 10:31:53.710582] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.565 
10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:29.565 10:31:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.566 10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.566 10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:29.566 10:31:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:29.566 10:31:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:29.566 10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:29.566 10:31:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.954 10:31:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:30.954 
10:31:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.954 10:31:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.954 10:31:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.954 10:31:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:30.954 10:31:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.954 10:31:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.954 10:31:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.954 10:31:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:30.954 10:31:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.954 10:31:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.954 10:31:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.954 10:31:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:30.954 10:31:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.954 10:31:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.954 10:31:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.954 10:31:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:30.954 10:31:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.954 10:31:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.954 10:31:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.954 10:31:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:30.954 10:31:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:30.954 10:31:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:30.954 10:31:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:30.954 10:31:54 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:30.954 10:31:54 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:30.954 10:31:54 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.954 00:06:30.954 real 0m1.286s 00:06:30.954 user 0m1.193s 00:06:30.954 sys 0m0.105s 00:06:30.954 10:31:54 accel.accel_dif_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:30.954 10:31:54 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:30.954 ************************************ 00:06:30.954 END TEST accel_dif_verify 00:06:30.954 ************************************ 00:06:30.954 10:31:54 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:30.954 10:31:54 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:06:30.954 10:31:54 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:30.954 10:31:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.954 ************************************ 00:06:30.954 START TEST accel_dif_generate 00:06:30.954 ************************************ 00:06:30.954 10:31:54 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate 00:06:30.954 10:31:54 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:30.954 10:31:54 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:30.954 10:31:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.954 10:31:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.954 
10:31:54 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:30.954 10:31:54 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:30.954 10:31:54 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:30.954 10:31:54 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.954 10:31:54 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.954 10:31:54 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.954 10:31:54 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.954 10:31:54 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.954 10:31:54 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:30.954 10:31:54 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:30.954 [2024-06-10 10:31:54.943902] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:06:30.954 [2024-06-10 10:31:54.943968] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid633361 ] 00:06:30.954 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.954 [2024-06-10 10:31:55.006055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.954 [2024-06-10 10:31:55.072558] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.954 10:31:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:30.954 10:31:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.954 10:31:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.954 10:31:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.954 10:31:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:30.954 10:31:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.954 10:31:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.954 10:31:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.954 10:31:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:30.954 10:31:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.954 10:31:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.954 10:31:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.954 10:31:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:30.954 10:31:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.954 10:31:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.954 10:31:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.954 10:31:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:30.954 10:31:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.954 10:31:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.954 10:31:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.954 10:31:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:30.954 10:31:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.954 10:31:55 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:30.954 10:31:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.954 10:31:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.954 10:31:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.954 10:31:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.954 10:31:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.954 10:31:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.955 10:31:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.339 10:31:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:32.339 10:31:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.339 10:31:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.339 10:31:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.339 10:31:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:32.339 10:31:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.339 10:31:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.339 10:31:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.339 10:31:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:32.339 10:31:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.339 10:31:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.339 10:31:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.339 10:31:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:32.339 10:31:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.339 10:31:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.339 10:31:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.339 10:31:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:32.339 10:31:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.340 10:31:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.340 10:31:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.340 10:31:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:32.340 10:31:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:32.340 10:31:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:32.340 10:31:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:32.340 10:31:56 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.340 10:31:56 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:32.340 10:31:56 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.340 00:06:32.340 real 0m1.285s 00:06:32.340 user 0m1.200s 00:06:32.340 sys 
0m0.099s 00:06:32.340 10:31:56 accel.accel_dif_generate -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:32.340 10:31:56 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:32.340 ************************************ 00:06:32.340 END TEST accel_dif_generate 00:06:32.340 ************************************ 00:06:32.340 10:31:56 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:32.340 10:31:56 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:06:32.340 10:31:56 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:32.340 10:31:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.340 ************************************ 00:06:32.340 START TEST accel_dif_generate_copy 00:06:32.340 ************************************ 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate_copy 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:32.340 [2024-06-10 10:31:56.288271] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
00:06:32.340 [2024-06-10 10:31:56.288308] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid633562 ] 00:06:32.340 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.340 [2024-06-10 10:31:56.340708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.340 [2024-06-10 10:31:56.405657] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.340 10:31:56 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.340 10:31:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.283 10:31:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:33.283 10:31:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.283 10:31:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:33.283 10:31:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.283 10:31:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:33.283 10:31:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.283 10:31:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.283 10:31:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.283 10:31:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:33.283 10:31:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.283 10:31:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.283 10:31:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.283 10:31:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:33.283 10:31:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.283 10:31:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.283 10:31:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.283 10:31:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:33.283 10:31:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.283 10:31:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.283 10:31:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.283 10:31:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:33.283 10:31:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.283 10:31:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.283 10:31:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.283 10:31:57 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.283 10:31:57 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:33.283 10:31:57 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.283 00:06:33.283 real 0m1.259s 00:06:33.283 user 0m1.178s 00:06:33.283 sys 0m0.092s 00:06:33.283 10:31:57 accel.accel_dif_generate_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:33.283 10:31:57 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:33.283 ************************************ 00:06:33.283 END TEST accel_dif_generate_copy 00:06:33.283 ************************************ 00:06:33.284 10:31:57 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:33.284 10:31:57 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:33.545 10:31:57 accel -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:06:33.545 10:31:57 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:33.545 10:31:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.545 ************************************ 00:06:33.545 START TEST accel_comp 00:06:33.545 ************************************ 00:06:33.545 10:31:57 accel.accel_comp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:33.545 10:31:57 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:33.545 10:31:57 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:06:33.545 10:31:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:33.545 10:31:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:33.545 10:31:57 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:33.545 10:31:57 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:33.545 10:31:57 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:33.545 10:31:57 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.545 10:31:57 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.545 10:31:57 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.545 10:31:57 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.545 10:31:57 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.545 10:31:57 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:33.545 10:31:57 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:33.545 [2024-06-10 10:31:57.642095] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:06:33.545 [2024-06-10 10:31:57.642202] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid633890 ] 00:06:33.545 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.545 [2024-06-10 10:31:57.710483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.545 [2024-06-10 10:31:57.776683] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.545 10:31:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:33.545 10:31:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.545 10:31:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:33.545 10:31:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:33.545 10:31:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:33.545 10:31:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.545 10:31:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:33.545 10:31:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:33.545 10:31:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:33.545 10:31:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.545 10:31:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:33.545 10:31:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:33.545 10:31:57 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:33.545 10:31:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.545 10:31:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:33.545 10:31:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:33.545 10:31:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:33.545 10:31:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.545 10:31:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:33.545 10:31:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:33.545 10:31:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:33.545 10:31:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.545 
10:31:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:33.545 10:31:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:33.546 10:31:57 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:33.546 10:31:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.931 10:31:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.931 10:31:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.931 10:31:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.931 10:31:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.931 10:31:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.931 10:31:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.931 10:31:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.931 10:31:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.931 10:31:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.931 10:31:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.931 10:31:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.931 10:31:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.931 10:31:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.931 10:31:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.931 10:31:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.931 10:31:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.931 10:31:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.931 10:31:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.931 10:31:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.931 10:31:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.931 10:31:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:34.931 10:31:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.931 10:31:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:34.931 10:31:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:34.931 10:31:58 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:34.931 10:31:58 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:34.931 10:31:58 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.931 00:06:34.931 real 0m1.299s 00:06:34.931 user 0m1.199s 00:06:34.931 sys 0m0.111s 00:06:34.931 10:31:58 accel.accel_comp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:34.931 10:31:58 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:34.931 ************************************ 00:06:34.931 END TEST accel_comp 00:06:34.931 ************************************ 00:06:34.931 10:31:58 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:34.931 10:31:58 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:34.931 10:31:58 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:34.931 10:31:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.931 ************************************ 00:06:34.931 START TEST accel_decomp 00:06:34.931 ************************************ 00:06:34.931 10:31:58 
accel.accel_decomp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:34.931 10:31:58 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:34.931 10:31:58 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:34.931 10:31:58 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:34.931 10:31:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.931 10:31:58 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:34.931 10:31:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.931 10:31:58 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:34.931 10:31:58 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.931 10:31:58 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.931 10:31:58 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.932 10:31:58 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.932 10:31:58 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.932 10:31:58 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:34.932 10:31:58 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:34.932 [2024-06-10 10:31:58.993968] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:06:34.932 [2024-06-10 10:31:58.994002] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid634248 ] 00:06:34.932 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.932 [2024-06-10 10:31:59.045393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.932 [2024-06-10 10:31:59.109684] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:34.932 10:31:59 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.932 10:31:59 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.932 10:31:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.347 10:32:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:36.347 10:32:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.347 10:32:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.347 10:32:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.347 10:32:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:36.347 10:32:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.347 10:32:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.347 10:32:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.347 10:32:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:36.347 10:32:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.347 10:32:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.347 10:32:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.347 10:32:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:36.347 10:32:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.347 10:32:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.347 10:32:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.347 10:32:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:36.347 10:32:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.347 10:32:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.347 10:32:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.347 10:32:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:36.347 10:32:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:36.347 10:32:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:36.347 10:32:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:36.347 10:32:00 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.347 10:32:00 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:36.347 10:32:00 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.347 00:06:36.347 real 0m1.259s 00:06:36.347 user 0m1.186s 00:06:36.347 sys 0m0.086s 00:06:36.347 10:32:00 accel.accel_decomp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:36.347 10:32:00 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:36.347 ************************************ 00:06:36.347 END TEST accel_decomp 00:06:36.347 ************************************ 00:06:36.347 
10:32:00 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:36.347 10:32:00 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:06:36.347 10:32:00 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:36.347 10:32:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.347 ************************************ 00:06:36.347 START TEST accel_decomp_full 00:06:36.347 ************************************ 00:06:36.347 10:32:00 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:36.347 10:32:00 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:36.347 10:32:00 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:36.347 10:32:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.347 10:32:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.347 10:32:00 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:36.347 10:32:00 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:36.347 10:32:00 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:36.347 10:32:00 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.347 10:32:00 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.347 10:32:00 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.347 10:32:00 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.347 10:32:00 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.347 10:32:00 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:36.347 10:32:00 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:36.347 [2024-06-10 10:32:00.344950] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
00:06:36.347 [2024-06-10 10:32:00.345034] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid634595 ] 00:06:36.347 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.347 [2024-06-10 10:32:00.407539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.347 [2024-06-10 10:32:00.471181] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.347 10:32:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:36.347 10:32:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 
00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.348 10:32:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.734 10:32:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:37.734 10:32:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.734 10:32:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.734 10:32:01 accel.accel_decomp_full -- accel/accel.sh@19 -- 
# read -r var val 00:06:37.734 10:32:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:37.734 10:32:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.734 10:32:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.734 10:32:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.734 10:32:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:37.734 10:32:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.734 10:32:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.734 10:32:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.734 10:32:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:37.734 10:32:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.734 10:32:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.734 10:32:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.734 10:32:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:37.734 10:32:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.734 10:32:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.734 10:32:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.734 10:32:01 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:37.734 10:32:01 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:37.734 10:32:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:37.734 10:32:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:37.734 10:32:01 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.734 10:32:01 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:37.734 10:32:01 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.734 00:06:37.734 real 0m1.295s 00:06:37.734 user 0m1.208s 00:06:37.734 sys 0m0.100s 00:06:37.734 10:32:01 accel.accel_decomp_full -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:37.734 10:32:01 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:37.734 ************************************ 00:06:37.734 END TEST accel_decomp_full 00:06:37.734 ************************************ 00:06:37.734 10:32:01 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:37.734 10:32:01 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:06:37.735 10:32:01 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:37.735 10:32:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.735 ************************************ 00:06:37.735 START TEST accel_decomp_mcore 00:06:37.735 ************************************ 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:37.735 [2024-06-10 10:32:01.717190] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:06:37.735 [2024-06-10 10:32:01.717295] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid634847 ] 00:06:37.735 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.735 [2024-06-10 10:32:01.782758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:37.735 [2024-06-10 10:32:01.859063] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.735 [2024-06-10 10:32:01.859179] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.735 [2024-06-10 10:32:01.859336] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:06:37.735 [2024-06-10 10:32:01.859470] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.735 10:32:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
00:06:39.131 10:32:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.131 00:06:39.131 real 0m1.313s 00:06:39.131 user 0m4.445s 00:06:39.131 sys 0m0.121s 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:39.131 10:32:02 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:39.131 ************************************ 00:06:39.131 END TEST accel_decomp_mcore 00:06:39.131 ************************************ 00:06:39.132 10:32:03 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:39.132 10:32:03 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:06:39.132 10:32:03 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:39.132 10:32:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.132 ************************************ 00:06:39.132 START TEST accel_decomp_full_mcore 00:06:39.132 ************************************ 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 
0 -gt 0 ]] 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:39.132 [2024-06-10 10:32:03.104442] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:06:39.132 [2024-06-10 10:32:03.104508] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid635044 ] 00:06:39.132 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.132 [2024-06-10 10:32:03.168736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:39.132 [2024-06-10 10:32:03.241839] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.132 [2024-06-10 10:32:03.241953] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.132 [2024-06-10 10:32:03.242108] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.132 [2024-06-10 10:32:03.242108] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.132 10:32:03 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.132 10:32:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.562 10:32:04 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.562 00:06:40.562 real 0m1.318s 00:06:40.562 user 0m4.479s 00:06:40.562 sys 0m0.122s 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:40.562 10:32:04 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:40.562 ************************************ 00:06:40.562 END TEST accel_decomp_full_mcore 00:06:40.562 ************************************ 00:06:40.562 10:32:04 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:40.562 10:32:04 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:06:40.562 10:32:04 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:40.562 10:32:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.562 ************************************ 00:06:40.562 START TEST accel_decomp_mthread 00:06:40.562 ************************************ 00:06:40.562 10:32:04 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:40.562 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:40.562 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:40.562 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.562 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.562 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:40.562 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:40.562 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:40.562 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.562 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.562 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.562 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.562 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.562 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:40.562 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
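A minimal sketch of how the accel_perf decompress runs driven above could be reproduced by hand, assuming a local SPDK build at the workspace path this log uses; the flags are copied from the invocations in the xtrace, and the JSON accel config that accel.sh feeds in over /dev/fd/62 is omitted here on the assumption that accel_perf falls back to its built-in defaults without -c:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # multi-core variant (accel_decomp_mcore): core mask 0xf starts four reactors
  $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -m 0xf
  # multi-thread variant (accel_decomp_mthread): one core, two worker threads via -T 2
  $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -T 2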
00:06:40.562 [2024-06-10 10:32:04.497461] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:06:40.562 [2024-06-10 10:32:04.497541] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid635342 ] 00:06:40.562 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.562 [2024-06-10 10:32:04.560310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.563 [2024-06-10 10:32:04.625004] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.563 10:32:04 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.563 10:32:04 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:06:41.587 10:32:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:41.587 10:32:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.587 10:32:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.587 10:32:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.587 10:32:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:41.587 10:32:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.587 10:32:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.587 10:32:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.587 10:32:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:41.587 10:32:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.587 10:32:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.587 10:32:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.587 10:32:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:41.587 10:32:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.587 10:32:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.587 10:32:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.587 10:32:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:41.587 10:32:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.587 10:32:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.587 10:32:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.587 10:32:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:41.587 10:32:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.587 10:32:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.587 10:32:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.587 10:32:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:41.587 10:32:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.587 10:32:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.587 10:32:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.587 10:32:05 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:41.587 10:32:05 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:41.587 10:32:05 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.587 00:06:41.587 real 0m1.294s 00:06:41.587 user 0m1.194s 00:06:41.587 sys 0m0.112s 00:06:41.587 10:32:05 accel.accel_decomp_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:41.587 10:32:05 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:41.587 ************************************ 00:06:41.587 END TEST accel_decomp_mthread 00:06:41.587 ************************************ 00:06:41.587 10:32:05 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:41.587 10:32:05 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:06:41.587 10:32:05 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:41.587 10:32:05 
accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.587 ************************************ 00:06:41.587 START TEST accel_decomp_full_mthread 00:06:41.587 ************************************ 00:06:41.587 10:32:05 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:41.587 10:32:05 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:41.587 10:32:05 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:41.587 10:32:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.587 10:32:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.587 10:32:05 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:41.587 10:32:05 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:41.587 10:32:05 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:41.587 10:32:05 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.587 10:32:05 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.587 10:32:05 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.587 10:32:05 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.587 10:32:05 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.587 10:32:05 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:41.587 10:32:05 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:41.587 [2024-06-10 10:32:05.848273] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
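For the "full" variants the only change visible in the xtrace is the extra -o 0 and the payload size reported as '111250 bytes' instead of '4096 bytes', which suggests -o 0 makes accel_perf decompress the whole bib file per operation rather than 4096-byte chunks; a hedged sketch of the full multi-thread run, under the same assumptions as the sketch above:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # accel_decomp_full_mthread: whole-buffer decompress (-o 0) on one core with two threads
  $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0 -T 2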
00:06:41.587 [2024-06-10 10:32:05.848334] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid635695 ] 00:06:41.848 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.848 [2024-06-10 10:32:05.909095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.848 [2024-06-10 10:32:05.973938] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.848 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:41.848 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.848 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.848 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.848 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:41.848 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.848 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- 
# read -r var val 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 
00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.849 10:32:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.231 10:32:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:43.231 10:32:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.231 10:32:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.231 10:32:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.231 10:32:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:43.231 10:32:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.231 10:32:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.231 10:32:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.231 10:32:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:43.232 10:32:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.232 10:32:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.232 10:32:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.232 10:32:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:43.232 10:32:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.232 10:32:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.232 10:32:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.232 10:32:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:43.232 10:32:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.232 10:32:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.232 10:32:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.232 10:32:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:43.232 10:32:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.232 10:32:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.232 10:32:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.232 10:32:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:43.232 10:32:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:43.232 10:32:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:43.232 10:32:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:43.232 10:32:07 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.232 10:32:07 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:43.232 10:32:07 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.232 00:06:43.232 real 0m1.313s 00:06:43.232 user 0m1.226s 00:06:43.232 sys 0m0.098s 00:06:43.232 10:32:07 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:43.232 10:32:07 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:43.232 ************************************ 00:06:43.232 END TEST accel_decomp_full_mthread 00:06:43.232 
************************************ 00:06:43.232 10:32:07 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:43.232 10:32:07 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:43.232 10:32:07 accel -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:06:43.232 10:32:07 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:43.232 10:32:07 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:43.232 10:32:07 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.232 10:32:07 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.232 10:32:07 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.232 10:32:07 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.232 10:32:07 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.232 10:32:07 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.232 10:32:07 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:43.232 10:32:07 accel -- accel/accel.sh@41 -- # jq -r . 00:06:43.232 ************************************ 00:06:43.232 START TEST accel_dif_functional_tests 00:06:43.232 ************************************ 00:06:43.232 10:32:07 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:43.232 [2024-06-10 10:32:07.257845] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:06:43.232 [2024-06-10 10:32:07.257893] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid636051 ] 00:06:43.232 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.232 [2024-06-10 10:32:07.317688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.232 [2024-06-10 10:32:07.385875] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.232 [2024-06-10 10:32:07.385994] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.232 [2024-06-10 10:32:07.385997] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.232 00:06:43.232 00:06:43.232 CUnit - A unit testing framework for C - Version 2.1-3 00:06:43.232 http://cunit.sourceforge.net/ 00:06:43.232 00:06:43.232 00:06:43.232 Suite: accel_dif 00:06:43.232 Test: verify: DIF generated, GUARD check ...passed 00:06:43.232 Test: verify: DIF generated, APPTAG check ...passed 00:06:43.232 Test: verify: DIF generated, REFTAG check ...passed 00:06:43.232 Test: verify: DIF not generated, GUARD check ...[2024-06-10 10:32:07.441568] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:43.232 passed 00:06:43.232 Test: verify: DIF not generated, APPTAG check ...[2024-06-10 10:32:07.441612] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:43.232 passed 00:06:43.232 Test: verify: DIF not generated, REFTAG check ...[2024-06-10 10:32:07.441634] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:43.232 passed 00:06:43.232 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:43.232 Test: verify: APPTAG incorrect, APPTAG check ...[2024-06-10 10:32:07.441681] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:43.232 passed 00:06:43.232 Test: 
verify: APPTAG incorrect, no APPTAG check ...passed 00:06:43.232 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:43.232 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:43.232 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-06-10 10:32:07.441794] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:43.232 passed 00:06:43.232 Test: verify copy: DIF generated, GUARD check ...passed 00:06:43.232 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:43.232 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:43.232 Test: verify copy: DIF not generated, GUARD check ...[2024-06-10 10:32:07.441914] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:43.232 passed 00:06:43.232 Test: verify copy: DIF not generated, APPTAG check ...[2024-06-10 10:32:07.441943] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:43.232 passed 00:06:43.233 Test: verify copy: DIF not generated, REFTAG check ...[2024-06-10 10:32:07.441966] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:43.233 passed 00:06:43.233 Test: generate copy: DIF generated, GUARD check ...passed 00:06:43.233 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:43.233 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:43.233 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:43.233 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:43.233 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:43.233 Test: generate copy: iovecs-len validate ...[2024-06-10 10:32:07.442154] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:43.233 passed 00:06:43.233 Test: generate copy: buffer alignment validate ...passed 00:06:43.233 00:06:43.233 Run Summary: Type Total Ran Passed Failed Inactive 00:06:43.233 suites 1 1 n/a 0 0 00:06:43.233 tests 26 26 26 0 0 00:06:43.233 asserts 115 115 115 0 n/a 00:06:43.233 00:06:43.233 Elapsed time = 0.000 seconds 00:06:43.493 00:06:43.494 real 0m0.347s 00:06:43.494 user 0m0.482s 00:06:43.494 sys 0m0.128s 00:06:43.494 10:32:07 accel.accel_dif_functional_tests -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:43.494 10:32:07 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:43.494 ************************************ 00:06:43.494 END TEST accel_dif_functional_tests 00:06:43.494 ************************************ 00:06:43.494 00:06:43.494 real 0m29.971s 00:06:43.494 user 0m33.538s 00:06:43.494 sys 0m4.120s 00:06:43.494 10:32:07 accel -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:43.494 10:32:07 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.494 ************************************ 00:06:43.494 END TEST accel 00:06:43.494 ************************************ 00:06:43.494 10:32:07 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:43.494 10:32:07 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:43.494 10:32:07 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:43.494 10:32:07 -- common/autotest_common.sh@10 -- # set +x 00:06:43.494 ************************************ 00:06:43.494 START TEST accel_rpc 00:06:43.494 ************************************ 00:06:43.494 10:32:07 accel_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:43.494 * Looking for test storage... 00:06:43.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:43.494 10:32:07 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:43.494 10:32:07 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=636120 00:06:43.494 10:32:07 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 636120 00:06:43.494 10:32:07 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:43.494 10:32:07 accel_rpc -- common/autotest_common.sh@830 -- # '[' -z 636120 ']' 00:06:43.494 10:32:07 accel_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.494 10:32:07 accel_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:43.494 10:32:07 accel_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.494 10:32:07 accel_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:43.494 10:32:07 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.754 [2024-06-10 10:32:07.813465] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
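The accel_rpc test that follows drives the freshly started target over JSON-RPC; condensed into direct rpc.py calls (the same methods appear in the rpc_cmd xtrace below), the opcode-assignment flow looks roughly like this, assuming the target is still idle in --wait-for-rpc mode:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # pre-init, assign the copy opcode: first to a deliberately bogus module name, then to software
  $RPC accel_assign_opc -o copy -m incorrect
  $RPC accel_assign_opc -o copy -m software
  # leave --wait-for-rpc mode so the assignment takes effect
  $RPC framework_start_init
  # read the assignment back; the test expects "software" for the copy opcode
  $RPC accel_get_opc_assignments | jq -r .copy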
00:06:43.754 [2024-06-10 10:32:07.813538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid636120 ] 00:06:43.754 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.754 [2024-06-10 10:32:07.877820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.754 [2024-06-10 10:32:07.953998] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.326 10:32:08 accel_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:44.326 10:32:08 accel_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:44.326 10:32:08 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:44.326 10:32:08 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:44.326 10:32:08 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:44.326 10:32:08 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:44.326 10:32:08 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:44.326 10:32:08 accel_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:44.326 10:32:08 accel_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:44.326 10:32:08 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.326 ************************************ 00:06:44.326 START TEST accel_assign_opcode 00:06:44.326 ************************************ 00:06:44.326 10:32:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # accel_assign_opcode_test_suite 00:06:44.326 10:32:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:44.326 10:32:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:44.326 10:32:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:44.326 [2024-06-10 10:32:08.583857] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:44.326 10:32:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:44.326 10:32:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:44.326 10:32:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:44.327 10:32:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:44.327 [2024-06-10 10:32:08.595881] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:44.327 10:32:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:44.327 10:32:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:44.327 10:32:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:44.327 10:32:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:44.587 10:32:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:44.587 10:32:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:44.587 10:32:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:44.587 10:32:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:44.587 10:32:08 
accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:44.587 10:32:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:44.587 10:32:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:44.587 software 00:06:44.587 00:06:44.587 real 0m0.213s 00:06:44.587 user 0m0.046s 00:06:44.587 sys 0m0.010s 00:06:44.587 10:32:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:44.587 10:32:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:44.587 ************************************ 00:06:44.587 END TEST accel_assign_opcode 00:06:44.587 ************************************ 00:06:44.587 10:32:08 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 636120 00:06:44.587 10:32:08 accel_rpc -- common/autotest_common.sh@949 -- # '[' -z 636120 ']' 00:06:44.587 10:32:08 accel_rpc -- common/autotest_common.sh@953 -- # kill -0 636120 00:06:44.587 10:32:08 accel_rpc -- common/autotest_common.sh@954 -- # uname 00:06:44.587 10:32:08 accel_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:44.587 10:32:08 accel_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 636120 00:06:44.848 10:32:08 accel_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:44.848 10:32:08 accel_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:44.848 10:32:08 accel_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 636120' 00:06:44.848 killing process with pid 636120 00:06:44.848 10:32:08 accel_rpc -- common/autotest_common.sh@968 -- # kill 636120 00:06:44.848 10:32:08 accel_rpc -- common/autotest_common.sh@973 -- # wait 636120 00:06:44.848 00:06:44.848 real 0m1.428s 00:06:44.848 user 0m1.492s 00:06:44.848 sys 0m0.389s 00:06:44.848 10:32:09 accel_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:44.848 10:32:09 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.848 ************************************ 00:06:44.848 END TEST accel_rpc 00:06:44.848 ************************************ 00:06:44.848 10:32:09 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:44.848 10:32:09 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:44.848 10:32:09 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:44.848 10:32:09 -- common/autotest_common.sh@10 -- # set +x 00:06:45.110 ************************************ 00:06:45.110 START TEST app_cmdline 00:06:45.110 ************************************ 00:06:45.110 10:32:09 app_cmdline -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:45.110 * Looking for test storage... 
00:06:45.110 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:45.110 10:32:09 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:45.110 10:32:09 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=636529 00:06:45.110 10:32:09 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 636529 00:06:45.110 10:32:09 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:45.110 10:32:09 app_cmdline -- common/autotest_common.sh@830 -- # '[' -z 636529 ']' 00:06:45.110 10:32:09 app_cmdline -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.110 10:32:09 app_cmdline -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:45.110 10:32:09 app_cmdline -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.110 10:32:09 app_cmdline -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:45.110 10:32:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:45.110 [2024-06-10 10:32:09.331194] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:06:45.110 [2024-06-10 10:32:09.331275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid636529 ] 00:06:45.110 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.370 [2024-06-10 10:32:09.398083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.370 [2024-06-10 10:32:09.471750] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.942 10:32:10 app_cmdline -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:45.942 10:32:10 app_cmdline -- common/autotest_common.sh@863 -- # return 0 00:06:45.942 10:32:10 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:46.203 { 00:06:46.203 "version": "SPDK v24.09-pre git sha1 bab0baf30", 00:06:46.203 "fields": { 00:06:46.203 "major": 24, 00:06:46.203 "minor": 9, 00:06:46.203 "patch": 0, 00:06:46.203 "suffix": "-pre", 00:06:46.203 "commit": "bab0baf30" 00:06:46.203 } 00:06:46.203 } 00:06:46.203 10:32:10 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:46.203 10:32:10 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:46.203 10:32:10 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:46.203 10:32:10 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:46.203 10:32:10 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:46.203 10:32:10 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:46.203 10:32:10 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:46.203 10:32:10 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:46.203 10:32:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:46.203 10:32:10 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:46.203 10:32:10 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:46.203 10:32:10 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:46.203 10:32:10 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:46.203 10:32:10 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:06:46.203 10:32:10 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:46.203 10:32:10 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:46.203 10:32:10 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:46.203 10:32:10 app_cmdline -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:46.203 10:32:10 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:46.203 10:32:10 app_cmdline -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:46.203 10:32:10 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:46.203 10:32:10 app_cmdline -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:46.203 10:32:10 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:46.203 10:32:10 app_cmdline -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:46.203 request: 00:06:46.203 { 00:06:46.203 "method": "env_dpdk_get_mem_stats", 00:06:46.203 "req_id": 1 00:06:46.203 } 00:06:46.203 Got JSON-RPC error response 00:06:46.203 response: 00:06:46.203 { 00:06:46.203 "code": -32601, 00:06:46.203 "message": "Method not found" 00:06:46.203 } 00:06:46.203 10:32:10 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:06:46.203 10:32:10 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:46.203 10:32:10 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:46.203 10:32:10 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:46.203 10:32:10 app_cmdline -- app/cmdline.sh@1 -- # killprocess 636529 00:06:46.203 10:32:10 app_cmdline -- common/autotest_common.sh@949 -- # '[' -z 636529 ']' 00:06:46.203 10:32:10 app_cmdline -- common/autotest_common.sh@953 -- # kill -0 636529 00:06:46.203 10:32:10 app_cmdline -- common/autotest_common.sh@954 -- # uname 00:06:46.203 10:32:10 app_cmdline -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:46.203 10:32:10 app_cmdline -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 636529 00:06:46.464 10:32:10 app_cmdline -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:46.464 10:32:10 app_cmdline -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:46.464 10:32:10 app_cmdline -- common/autotest_common.sh@967 -- # echo 'killing process with pid 636529' 00:06:46.464 killing process with pid 636529 00:06:46.464 10:32:10 app_cmdline -- common/autotest_common.sh@968 -- # kill 636529 00:06:46.464 10:32:10 app_cmdline -- common/autotest_common.sh@973 -- # wait 636529 00:06:46.464 00:06:46.464 real 0m1.564s 00:06:46.464 user 0m1.867s 00:06:46.464 sys 0m0.419s 00:06:46.464 10:32:10 app_cmdline -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:46.464 10:32:10 app_cmdline -- 
common/autotest_common.sh@10 -- # set +x 00:06:46.464 ************************************ 00:06:46.464 END TEST app_cmdline 00:06:46.464 ************************************ 00:06:46.725 10:32:10 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:46.725 10:32:10 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:46.725 10:32:10 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:46.725 10:32:10 -- common/autotest_common.sh@10 -- # set +x 00:06:46.725 ************************************ 00:06:46.725 START TEST version 00:06:46.725 ************************************ 00:06:46.725 10:32:10 version -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:46.725 * Looking for test storage... 00:06:46.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:46.725 10:32:10 version -- app/version.sh@17 -- # get_header_version major 00:06:46.725 10:32:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:46.725 10:32:10 version -- app/version.sh@14 -- # cut -f2 00:06:46.725 10:32:10 version -- app/version.sh@14 -- # tr -d '"' 00:06:46.725 10:32:10 version -- app/version.sh@17 -- # major=24 00:06:46.725 10:32:10 version -- app/version.sh@18 -- # get_header_version minor 00:06:46.725 10:32:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:46.725 10:32:10 version -- app/version.sh@14 -- # cut -f2 00:06:46.726 10:32:10 version -- app/version.sh@14 -- # tr -d '"' 00:06:46.726 10:32:10 version -- app/version.sh@18 -- # minor=9 00:06:46.726 10:32:10 version -- app/version.sh@19 -- # get_header_version patch 00:06:46.726 10:32:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:46.726 10:32:10 version -- app/version.sh@14 -- # cut -f2 00:06:46.726 10:32:10 version -- app/version.sh@14 -- # tr -d '"' 00:06:46.726 10:32:10 version -- app/version.sh@19 -- # patch=0 00:06:46.726 10:32:10 version -- app/version.sh@20 -- # get_header_version suffix 00:06:46.726 10:32:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:46.726 10:32:10 version -- app/version.sh@14 -- # cut -f2 00:06:46.726 10:32:10 version -- app/version.sh@14 -- # tr -d '"' 00:06:46.726 10:32:10 version -- app/version.sh@20 -- # suffix=-pre 00:06:46.726 10:32:10 version -- app/version.sh@22 -- # version=24.9 00:06:46.726 10:32:10 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:46.726 10:32:10 version -- app/version.sh@28 -- # version=24.9rc0 00:06:46.726 10:32:10 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:46.726 10:32:10 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:46.726 10:32:10 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:46.726 10:32:10 
version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:46.726 00:06:46.726 real 0m0.168s 00:06:46.726 user 0m0.090s 00:06:46.726 sys 0m0.117s 00:06:46.726 10:32:10 version -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:46.726 10:32:10 version -- common/autotest_common.sh@10 -- # set +x 00:06:46.726 ************************************ 00:06:46.726 END TEST version 00:06:46.726 ************************************ 00:06:46.726 10:32:11 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:46.726 10:32:11 -- spdk/autotest.sh@198 -- # uname -s 00:06:46.987 10:32:11 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:46.987 10:32:11 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:46.987 10:32:11 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:46.987 10:32:11 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:46.987 10:32:11 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:46.987 10:32:11 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:46.987 10:32:11 -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:46.987 10:32:11 -- common/autotest_common.sh@10 -- # set +x 00:06:46.987 10:32:11 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:46.987 10:32:11 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:46.987 10:32:11 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:46.987 10:32:11 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:46.987 10:32:11 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:46.987 10:32:11 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:46.987 10:32:11 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:46.987 10:32:11 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:46.987 10:32:11 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:46.987 10:32:11 -- common/autotest_common.sh@10 -- # set +x 00:06:46.987 ************************************ 00:06:46.987 START TEST nvmf_tcp 00:06:46.987 ************************************ 00:06:46.987 10:32:11 nvmf_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:46.987 * Looking for test storage... 00:06:46.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:46.987 10:32:11 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:46.987 10:32:11 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:46.987 10:32:11 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:46.987 10:32:11 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:46.987 10:32:11 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:46.987 10:32:11 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:46.987 10:32:11 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:46.987 10:32:11 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:46.987 10:32:11 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:46.987 10:32:11 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:46.987 10:32:11 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:46.987 10:32:11 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:46.987 10:32:11 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:46.987 10:32:11 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:46.987 10:32:11 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:46.987 10:32:11 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:46.987 10:32:11 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:46.987 10:32:11 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:46.987 10:32:11 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:46.987 10:32:11 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:46.987 10:32:11 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:46.987 10:32:11 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:46.987 10:32:11 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:46.988 10:32:11 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:46.988 10:32:11 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.988 10:32:11 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.988 10:32:11 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.988 10:32:11 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:46.988 10:32:11 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.988 10:32:11 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:46.988 10:32:11 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:46.988 10:32:11 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:46.988 10:32:11 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:46.988 10:32:11 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:46.988 10:32:11 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:46.988 10:32:11 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:46.988 10:32:11 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:46.988 10:32:11 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:46.988 10:32:11 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:46.988 10:32:11 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:46.988 10:32:11 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:46.988 10:32:11 nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:46.988 10:32:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:46.988 10:32:11 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:46.988 10:32:11 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:46.988 10:32:11 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:46.988 10:32:11 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:46.988 10:32:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:46.988 ************************************ 00:06:46.988 START TEST nvmf_example 00:06:46.988 ************************************ 00:06:46.988 10:32:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:47.250 * Looking for test storage... 
00:06:47.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@723 -- # xtrace_disable 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:47.250 10:32:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:55.399 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:55.399 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:55.399 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:55.400 Found net devices under 
0000:31:00.0: cvl_0_0 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:55.400 Found net devices under 0000:31:00.1: cvl_0_1 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:55.400 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:55.400 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:06:55.400 00:06:55.400 --- 10.0.0.2 ping statistics --- 00:06:55.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:55.400 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:55.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:55.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:06:55.400 00:06:55.400 --- 10.0.0.1 ping statistics --- 00:06:55.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:55.400 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=640909 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 640909 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@830 -- # '[' -z 640909 ']' 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
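The sequence above builds the loopback test topology for the TCP transport: one E810 port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, the peer port (cvl_0_1) stays in the root namespace as 10.0.0.1, TCP port 4420 is opened in the firewall, and both directions are verified with a single ping before nvme-tcp is loaded and the example target is started. A minimal stand-alone sketch of the same wiring, assuming the interface names reported above and a root shell:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                        # root namespace -> namespaced target port
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # and the reverse direction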
00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:55.400 10:32:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:55.400 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.400 10:32:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:55.400 10:32:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@863 -- # return 0 00:06:55.400 10:32:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:55.400 10:32:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:55.400 10:32:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:55.400 10:32:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:55.400 10:32:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:55.400 10:32:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:55.400 10:32:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:55.400 10:32:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:55.400 10:32:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:55.400 10:32:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:55.400 10:32:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:55.400 10:32:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:55.400 10:32:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:55.400 10:32:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:55.400 10:32:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:55.400 10:32:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:55.400 10:32:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:55.400 10:32:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:55.400 10:32:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:55.400 10:32:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:55.400 10:32:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:55.400 10:32:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:55.400 10:32:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:55.400 10:32:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:55.400 10:32:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:55.400 10:32:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:55.400 10:32:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:55.400 EAL: No free 2048 kB hugepages reported on node 1 
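Everything the example target serves in this run is configured over JSON-RPC before the perf client is launched: a TCP transport with an 8192-byte IO unit, a 64 MiB Malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as namespace 1, and a TCP listener on 10.0.0.2:4420. A condensed sketch of the same setup driven by rpc.py by hand, assuming the target is already running and answering on the default /var/tmp/spdk.sock socket and that commands are issued from the SPDK source tree:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512                 # prints the new bdev name, e.g. Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # then exercise it from the initiator side with the perf tool, as the test does
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'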
00:07:07.632 Initializing NVMe Controllers 00:07:07.632 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:07.632 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:07.632 Initialization complete. Launching workers. 00:07:07.632 ======================================================== 00:07:07.632 Latency(us) 00:07:07.632 Device Information : IOPS MiB/s Average min max 00:07:07.632 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18710.19 73.09 3420.11 598.15 16141.67 00:07:07.632 ======================================================== 00:07:07.632 Total : 18710.19 73.09 3420.11 598.15 16141.67 00:07:07.632 00:07:07.632 10:32:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:07.632 10:32:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:07.632 10:32:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:07.632 10:32:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:07.632 10:32:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:07.632 10:32:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:07.632 10:32:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:07.632 10:32:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:07.632 rmmod nvme_tcp 00:07:07.632 rmmod nvme_fabrics 00:07:07.632 rmmod nvme_keyring 00:07:07.632 10:32:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:07.633 10:32:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:07.633 10:32:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:07.633 10:32:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 640909 ']' 00:07:07.633 10:32:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 640909 00:07:07.633 10:32:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@949 -- # '[' -z 640909 ']' 00:07:07.633 10:32:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # kill -0 640909 00:07:07.633 10:32:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # uname 00:07:07.633 10:32:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:07.633 10:32:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 640909 00:07:07.633 10:32:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@955 -- # process_name=nvmf 00:07:07.633 10:32:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@959 -- # '[' nvmf = sudo ']' 00:07:07.633 10:32:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # echo 'killing process with pid 640909' 00:07:07.633 killing process with pid 640909 00:07:07.633 10:32:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@968 -- # kill 640909 00:07:07.633 10:32:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@973 -- # wait 640909 00:07:07.633 nvmf threads initialize successfully 00:07:07.633 bdev subsystem init successfully 00:07:07.633 created a nvmf target service 00:07:07.633 create targets's poll groups done 00:07:07.633 all subsystems of target started 00:07:07.633 nvmf target is running 00:07:07.633 all subsystems of target stopped 00:07:07.633 destroy targets's poll groups done 00:07:07.633 destroyed the nvmf target service 00:07:07.633 bdev subsystem finish successfully 00:07:07.633 nvmf threads destroy successfully 00:07:07.633 10:32:30 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:07.633 10:32:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:07.633 10:32:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:07.633 10:32:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:07.633 10:32:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:07.633 10:32:30 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:07.633 10:32:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:07.633 10:32:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:08.205 10:32:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:08.205 10:32:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:08.205 10:32:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:08.205 10:32:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:08.205 00:07:08.205 real 0m21.016s 00:07:08.205 user 0m46.627s 00:07:08.205 sys 0m6.412s 00:07:08.205 10:32:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:08.205 10:32:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:08.205 ************************************ 00:07:08.205 END TEST nvmf_example 00:07:08.205 ************************************ 00:07:08.205 10:32:32 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:08.205 10:32:32 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:08.205 10:32:32 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:08.205 10:32:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:08.205 ************************************ 00:07:08.205 START TEST nvmf_filesystem 00:07:08.205 ************************************ 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:08.205 * Looking for test storage... 
00:07:08.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:08.205 10:32:32 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:08.205 10:32:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:08.206 10:32:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:08.206 10:32:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:08.206 #define SPDK_CONFIG_H 00:07:08.206 #define SPDK_CONFIG_APPS 1 00:07:08.206 #define SPDK_CONFIG_ARCH native 00:07:08.206 #undef SPDK_CONFIG_ASAN 00:07:08.206 #undef SPDK_CONFIG_AVAHI 00:07:08.206 #undef SPDK_CONFIG_CET 00:07:08.206 #define SPDK_CONFIG_COVERAGE 1 00:07:08.206 #define SPDK_CONFIG_CROSS_PREFIX 00:07:08.206 #undef SPDK_CONFIG_CRYPTO 00:07:08.206 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:08.206 #undef SPDK_CONFIG_CUSTOMOCF 00:07:08.206 #undef SPDK_CONFIG_DAOS 00:07:08.206 #define SPDK_CONFIG_DAOS_DIR 00:07:08.206 #define SPDK_CONFIG_DEBUG 1 00:07:08.206 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:08.206 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:08.206 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:08.206 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:08.206 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:08.206 #undef SPDK_CONFIG_DPDK_UADK 00:07:08.206 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:08.206 #define SPDK_CONFIG_EXAMPLES 1 00:07:08.206 #undef SPDK_CONFIG_FC 00:07:08.206 #define SPDK_CONFIG_FC_PATH 00:07:08.206 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:08.206 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:08.206 #undef SPDK_CONFIG_FUSE 00:07:08.206 #undef SPDK_CONFIG_FUZZER 00:07:08.206 #define SPDK_CONFIG_FUZZER_LIB 00:07:08.206 #undef SPDK_CONFIG_GOLANG 00:07:08.206 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:08.206 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:08.206 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:08.206 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:08.206 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:08.206 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:08.206 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:08.206 #define SPDK_CONFIG_IDXD 1 00:07:08.206 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:08.206 #undef SPDK_CONFIG_IPSEC_MB 00:07:08.206 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:08.206 #define SPDK_CONFIG_ISAL 1 00:07:08.206 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:08.206 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:08.206 #define SPDK_CONFIG_LIBDIR 00:07:08.206 #undef SPDK_CONFIG_LTO 00:07:08.206 #define SPDK_CONFIG_MAX_LCORES 00:07:08.206 #define SPDK_CONFIG_NVME_CUSE 1 00:07:08.206 #undef SPDK_CONFIG_OCF 00:07:08.206 #define SPDK_CONFIG_OCF_PATH 00:07:08.206 #define 
SPDK_CONFIG_OPENSSL_PATH 00:07:08.206 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:08.206 #define SPDK_CONFIG_PGO_DIR 00:07:08.206 #undef SPDK_CONFIG_PGO_USE 00:07:08.206 #define SPDK_CONFIG_PREFIX /usr/local 00:07:08.206 #undef SPDK_CONFIG_RAID5F 00:07:08.206 #undef SPDK_CONFIG_RBD 00:07:08.206 #define SPDK_CONFIG_RDMA 1 00:07:08.206 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:08.206 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:08.206 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:08.206 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:08.206 #define SPDK_CONFIG_SHARED 1 00:07:08.206 #undef SPDK_CONFIG_SMA 00:07:08.206 #define SPDK_CONFIG_TESTS 1 00:07:08.206 #undef SPDK_CONFIG_TSAN 00:07:08.206 #define SPDK_CONFIG_UBLK 1 00:07:08.206 #define SPDK_CONFIG_UBSAN 1 00:07:08.206 #undef SPDK_CONFIG_UNIT_TESTS 00:07:08.206 #undef SPDK_CONFIG_URING 00:07:08.206 #define SPDK_CONFIG_URING_PATH 00:07:08.206 #undef SPDK_CONFIG_URING_ZNS 00:07:08.206 #undef SPDK_CONFIG_USDT 00:07:08.206 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:08.206 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:08.206 #define SPDK_CONFIG_VFIO_USER 1 00:07:08.206 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:08.206 #define SPDK_CONFIG_VHOST 1 00:07:08.206 #define SPDK_CONFIG_VIRTIO 1 00:07:08.206 #undef SPDK_CONFIG_VTUNE 00:07:08.206 #define SPDK_CONFIG_VTUNE_DIR 00:07:08.206 #define SPDK_CONFIG_WERROR 1 00:07:08.206 #define SPDK_CONFIG_WPDK_DIR 00:07:08.206 #undef SPDK_CONFIG_XNVME 00:07:08.206 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:08.206 10:32:32 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:08.206 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:08.206 10:32:32 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:08.206 10:32:32 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.206 10:32:32 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.206 10:32:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.206 10:32:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.206 10:32:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.206 10:32:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:08.206 10:32:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.206 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:08.206 10:32:32 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:08.206 10:32:32 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:08.206 10:32:32 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:08.206 10:32:32 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:08.469 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:08.470 10:32:32 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:08.470 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
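Note on the long run of ": 0" / "export VAR" pairs traced above (common/autotest_common.sh@58 onward): each SPDK_TEST_* / SPDK_RUN_* switch is given a value only if autorun-spdk.conf has not already set it, then exported for the child test scripts. A minimal sketch of that defaulting idiom, with flag names taken from the trace; the right-hand defaults and the exact expansion form used in autotest_common.sh are assumptions for illustration:

  # ':' is a no-op command, so "${VAR:=default}" assigns only when VAR is
  # unset or empty. Values sourced earlier from autorun-spdk.conf (for this
  # run: SPDK_TEST_NVMF=1, SPDK_TEST_NVMF_TRANSPORT=tcp, SPDK_TEST_NVMF_NICS=e810)
  # therefore win; everything else falls back to a default.
  : "${RUN_NIGHTLY:=0}";                export RUN_NIGHTLY
  : "${SPDK_RUN_FUNCTIONAL_TEST:=0}";   export SPDK_RUN_FUNCTIONAL_TEST
  : "${SPDK_TEST_NVMF:=0}";             export SPDK_TEST_NVMF
  : "${SPDK_TEST_NVME_CLI:=0}";         export SPDK_TEST_NVME_CLI
  : "${SPDK_TEST_VFIOUSER:=0}";         export SPDK_TEST_VFIOUSER
  : "${SPDK_RUN_UBSAN:=0}";             export SPDK_RUN_UBSAN

The trace entries that print ": 1", ": tcp" or ": e810" are simply this expansion echoed with the already-configured value in place.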
00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j144 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 643777 ]] 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 643777 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # set_test_storage 2147483648 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.v8HnHL 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.v8HnHL/tests/target /tmp/spdk.v8HnHL 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=957403136 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4327026688 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=122985041920 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=129370996736 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6385954816 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64680787968 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685498368 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=25864454144 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=25874202624 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9748480 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=179200 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:07:08.471 10:32:32 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=324608 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64684331008 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685498368 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1167360 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12937093120 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937097216 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:08.471 * Looking for test storage... 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=122985041920 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8600547328 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:08.471 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:08.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1681 -- # set -o errtrace 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # shopt -s extdebug 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # true 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1688 -- # xtrace_fd 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:08.472 10:32:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:16.643 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:16.644 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 
(0x8086 - 0x159b)' 00:07:16.644 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:16.644 Found net devices under 0000:31:00.0: cvl_0_0 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:16.644 Found net devices under 0000:31:00.1: cvl_0_1 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:16.644 10:32:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:16.644 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:16.644 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.730 ms 00:07:16.644 00:07:16.644 --- 10.0.0.2 ping statistics --- 00:07:16.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.644 rtt min/avg/max/mdev = 0.730/0.730/0.730/0.000 ms 00:07:16.644 10:32:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:16.644 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:16.644 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.352 ms 00:07:16.644 00:07:16.644 --- 10.0.0.1 ping statistics --- 00:07:16.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.644 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:07:16.644 10:32:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:16.644 10:32:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:16.644 10:32:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:16.644 10:32:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:16.644 10:32:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:16.644 10:32:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:16.644 10:32:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:16.644 10:32:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:16.644 10:32:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:16.644 10:32:40 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:16.644 10:32:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:16.644 10:32:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:16.644 10:32:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:16.644 ************************************ 00:07:16.644 START TEST nvmf_filesystem_no_in_capsule 00:07:16.644 ************************************ 00:07:16.644 10:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 0 00:07:16.644 10:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:16.644 10:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:16.644 10:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:16.645 10:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:16.645 10:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:16.645 10:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=647499 00:07:16.645 10:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 647499 00:07:16.645 10:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 647499 ']' 00:07:16.645 10:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.645 10:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:16.645 10:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
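Condensed from the nvmf_tcp_init trace above, this is roughly what the suite does to turn the two E810 ports (cvl_0_0 / cvl_0_1) into an isolated target/initiator pair before the filesystem tests start. A minimal standalone sketch, not the test script itself: interface names, the namespace name and the 10.0.0.x addresses are taken from this log and will differ on other hosts, and every command needs root.

    #!/usr/bin/env bash
    # Sketch of the nvmf_tcp_init steps traced above (run as root).
    set -e
    TGT_IF=cvl_0_0          # port that will serve NVMe/TCP from inside the namespace
    INI_IF=cvl_0_1          # port that stays in the default namespace as the initiator
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"            # move the target port into its own namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"        # initiator side
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"   # target side
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                           # default ns -> namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1       # namespace -> default ns
    modprobe nvme-tcp                            # host-side initiator driver

The namespace split is presumably what lets the initiator and the SPDK target live on the same machine while still exchanging traffic over the real back-to-back E810 link rather than loopback.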
00:07:16.645 10:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:16.645 10:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:16.645 10:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:16.645 [2024-06-10 10:32:40.149786] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:07:16.645 [2024-06-10 10:32:40.149868] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.645 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.645 [2024-06-10 10:32:40.224837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:16.645 [2024-06-10 10:32:40.302155] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:16.645 [2024-06-10 10:32:40.302192] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:16.645 [2024-06-10 10:32:40.302199] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:16.645 [2024-06-10 10:32:40.302206] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:16.645 [2024-06-10 10:32:40.302211] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:16.645 [2024-06-10 10:32:40.302351] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.645 [2024-06-10 10:32:40.302504] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.645 [2024-06-10 10:32:40.302664] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.645 [2024-06-10 10:32:40.302664] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:07:16.645 10:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:16.645 10:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:07:16.645 10:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:16.645 10:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:16.645 10:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:16.906 10:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:16.906 10:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:16.906 10:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:16.906 10:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:16.906 10:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:16.906 [2024-06-10 10:32:40.963741] tcp.c: 672:nvmf_tcp_create: *NOTICE*: 
*** TCP Transport Init *** 00:07:16.906 10:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:16.906 10:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:16.906 10:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:16.906 10:32:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:16.906 Malloc1 00:07:16.906 10:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:16.906 10:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:16.906 10:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:16.906 10:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:16.906 10:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:16.906 10:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:16.906 10:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:16.906 10:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:16.906 10:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:16.906 10:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:16.906 10:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:16.906 10:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:16.906 [2024-06-10 10:32:41.103350] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:16.906 [2024-06-10 10:32:41.103588] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:16.906 10:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:16.906 10:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:16.906 10:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:07:16.906 10:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:07:16.906 10:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:07:16.906 10:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:07:16.906 10:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd 
bdev_get_bdevs -b Malloc1 00:07:16.906 10:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:16.906 10:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:16.906 10:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:16.906 10:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:07:16.906 { 00:07:16.906 "name": "Malloc1", 00:07:16.906 "aliases": [ 00:07:16.906 "aef51ec4-352c-4b6a-95e4-8380f87ac3f0" 00:07:16.906 ], 00:07:16.906 "product_name": "Malloc disk", 00:07:16.906 "block_size": 512, 00:07:16.906 "num_blocks": 1048576, 00:07:16.906 "uuid": "aef51ec4-352c-4b6a-95e4-8380f87ac3f0", 00:07:16.906 "assigned_rate_limits": { 00:07:16.906 "rw_ios_per_sec": 0, 00:07:16.906 "rw_mbytes_per_sec": 0, 00:07:16.906 "r_mbytes_per_sec": 0, 00:07:16.906 "w_mbytes_per_sec": 0 00:07:16.906 }, 00:07:16.906 "claimed": true, 00:07:16.906 "claim_type": "exclusive_write", 00:07:16.906 "zoned": false, 00:07:16.906 "supported_io_types": { 00:07:16.906 "read": true, 00:07:16.906 "write": true, 00:07:16.906 "unmap": true, 00:07:16.906 "write_zeroes": true, 00:07:16.906 "flush": true, 00:07:16.906 "reset": true, 00:07:16.906 "compare": false, 00:07:16.906 "compare_and_write": false, 00:07:16.906 "abort": true, 00:07:16.906 "nvme_admin": false, 00:07:16.906 "nvme_io": false 00:07:16.906 }, 00:07:16.906 "memory_domains": [ 00:07:16.906 { 00:07:16.906 "dma_device_id": "system", 00:07:16.906 "dma_device_type": 1 00:07:16.906 }, 00:07:16.906 { 00:07:16.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.906 "dma_device_type": 2 00:07:16.906 } 00:07:16.906 ], 00:07:16.906 "driver_specific": {} 00:07:16.906 } 00:07:16.906 ]' 00:07:16.906 10:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:07:16.906 10:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:07:16.906 10:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:07:17.167 10:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:07:17.167 10:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:07:17.167 10:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:07:17.167 10:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:17.167 10:32:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:18.549 10:32:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:18.549 10:32:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:07:18.549 10:32:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:07:18.549 10:32:42 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:07:18.549 10:32:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:07:21.094 10:32:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:07:21.094 10:32:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:21.094 10:32:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:07:21.094 10:32:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:07:21.094 10:32:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:07:21.094 10:32:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:07:21.094 10:32:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:21.094 10:32:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:21.094 10:32:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:21.094 10:32:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:21.094 10:32:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:21.094 10:32:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:21.094 10:32:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:21.094 10:32:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:21.094 10:32:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:21.094 10:32:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:21.094 10:32:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:21.094 10:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:21.094 10:32:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:22.479 10:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:22.479 10:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:22.479 10:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:22.479 10:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:22.479 10:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:22.479 ************************************ 00:07:22.480 START TEST 
filesystem_ext4 00:07:22.480 ************************************ 00:07:22.480 10:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:22.480 10:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:22.480 10:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:22.480 10:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:22.480 10:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:07:22.480 10:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:22.480 10:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:07:22.480 10:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local force 00:07:22.480 10:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:07:22.480 10:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # force=-F 00:07:22.480 10:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:22.480 mke2fs 1.46.5 (30-Dec-2021) 00:07:22.480 Discarding device blocks: 0/522240 done 00:07:22.480 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:22.480 Filesystem UUID: 1db948f0-c8a9-45bc-8ba1-4f594982ab58 00:07:22.480 Superblock backups stored on blocks: 00:07:22.480 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:22.480 00:07:22.480 Allocating group tables: 0/64 done 00:07:22.480 Writing inode tables: 0/64 done 00:07:22.480 Creating journal (8192 blocks): done 00:07:22.480 Writing superblocks and filesystem accounting information: 0/64 done 00:07:22.480 00:07:22.480 10:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@944 -- # return 0 00:07:22.480 10:32:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:23.052 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:23.052 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:23.052 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:23.052 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:23.052 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:23.052 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:23.052 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- 
target/filesystem.sh@37 -- # kill -0 647499 00:07:23.052 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:23.052 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:23.053 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:23.053 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:23.053 00:07:23.053 real 0m0.865s 00:07:23.053 user 0m0.021s 00:07:23.053 sys 0m0.053s 00:07:23.053 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:23.053 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:23.053 ************************************ 00:07:23.053 END TEST filesystem_ext4 00:07:23.053 ************************************ 00:07:23.053 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:23.053 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:23.053 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:23.053 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.053 ************************************ 00:07:23.053 START TEST filesystem_btrfs 00:07:23.053 ************************************ 00:07:23.053 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:23.053 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:23.053 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:23.053 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:23.053 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:07:23.053 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:23.053 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:07:23.053 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local force 00:07:23.053 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:07:23.053 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:07:23.053 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:23.314 btrfs-progs v6.6.2 00:07:23.314 See 
https://btrfs.readthedocs.io for more information. 00:07:23.314 00:07:23.314 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:23.314 NOTE: several default settings have changed in version 5.15, please make sure 00:07:23.314 this does not affect your deployments: 00:07:23.314 - DUP for metadata (-m dup) 00:07:23.314 - enabled no-holes (-O no-holes) 00:07:23.314 - enabled free-space-tree (-R free-space-tree) 00:07:23.314 00:07:23.314 Label: (null) 00:07:23.314 UUID: f21a47bf-5ba8-4b4f-bd0f-7aedd9435495 00:07:23.315 Node size: 16384 00:07:23.315 Sector size: 4096 00:07:23.315 Filesystem size: 510.00MiB 00:07:23.315 Block group profiles: 00:07:23.315 Data: single 8.00MiB 00:07:23.315 Metadata: DUP 32.00MiB 00:07:23.315 System: DUP 8.00MiB 00:07:23.315 SSD detected: yes 00:07:23.315 Zoned device: no 00:07:23.315 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:23.315 Runtime features: free-space-tree 00:07:23.315 Checksum: crc32c 00:07:23.315 Number of devices: 1 00:07:23.315 Devices: 00:07:23.315 ID SIZE PATH 00:07:23.315 1 510.00MiB /dev/nvme0n1p1 00:07:23.315 00:07:23.315 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@944 -- # return 0 00:07:23.315 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:23.576 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:23.576 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:23.576 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:23.576 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:23.576 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:23.576 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:23.576 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 647499 00:07:23.576 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:23.576 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:23.576 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:23.576 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:23.576 00:07:23.576 real 0m0.545s 00:07:23.576 user 0m0.017s 00:07:23.576 sys 0m0.072s 00:07:23.576 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:23.576 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:23.576 ************************************ 00:07:23.576 END TEST filesystem_btrfs 00:07:23.576 ************************************ 00:07:23.836 10:32:47 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:23.836 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:23.836 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:23.836 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.836 ************************************ 00:07:23.836 START TEST filesystem_xfs 00:07:23.836 ************************************ 00:07:23.836 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:07:23.836 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:23.836 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:23.836 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:23.837 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:07:23.837 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:23.837 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local i=0 00:07:23.837 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local force 00:07:23.837 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:07:23.837 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # force=-f 00:07:23.837 10:32:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:23.837 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:23.837 = sectsz=512 attr=2, projid32bit=1 00:07:23.837 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:23.837 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:23.837 data = bsize=4096 blocks=130560, imaxpct=25 00:07:23.837 = sunit=0 swidth=0 blks 00:07:23.837 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:23.837 log =internal log bsize=4096 blocks=16384, version=2 00:07:23.837 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:23.837 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:24.863 Discarding blocks...Done. 
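Each filesystem_* case above (ext4, btrfs, and the xfs run whose mkfs output ends here) exercises the same short cycle against the block device exported by the target. The real logic lives in target/filesystem.sh; the loop below is only a rough standalone sketch of that cycle, with the device and mount-point names taken from this log (the test additionally checks with kill -0 that the nvmf_tgt pid is still alive after each unmount).

    #!/usr/bin/env bash
    # Sketch of the per-filesystem check traced above for ext4/btrfs/xfs.
    set -e
    DEV=/dev/nvme0n1                 # namespace exported by the SPDK target
    PART=${DEV}p1
    MNT=/mnt/device

    mkdir -p "$MNT"
    parted -s "$DEV" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe && sleep 1

    for fs in ext4 btrfs xfs; do
        case "$fs" in
            ext4) mkfs.ext4 -F "$PART" ;;        # ext4 uses -F to force
            *)    "mkfs.$fs" -f "$PART" ;;       # btrfs/xfs use -f
        esac
        mount "$PART" "$MNT"
        touch "$MNT/aaa"                         # write something through the filesystem
        sync
        rm "$MNT/aaa"
        sync
        umount "$MNT"
        lsblk -l -o NAME | grep -q -w "$(basename "$PART")"   # device and partition still visible
    done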
00:07:24.863 10:32:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@944 -- # return 0 00:07:24.863 10:32:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:26.798 10:32:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:26.798 10:32:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:26.798 10:32:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:26.798 10:32:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:26.798 10:32:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:26.798 10:32:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:26.798 10:32:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 647499 00:07:26.798 10:32:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:26.798 10:32:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:26.798 10:32:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:26.798 10:32:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:26.798 00:07:26.798 real 0m2.894s 00:07:26.798 user 0m0.022s 00:07:26.798 sys 0m0.058s 00:07:26.798 10:32:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:26.798 10:32:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:26.798 ************************************ 00:07:26.798 END TEST filesystem_xfs 00:07:26.798 ************************************ 00:07:26.798 10:32:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:26.798 10:32:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:26.798 10:32:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:27.059 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:27.059 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:27.059 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:07:27.059 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:07:27.059 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:27.059 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:07:27.059 
10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:27.059 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:07:27.059 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:27.059 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.059 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:27.059 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:27.059 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:27.059 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 647499 00:07:27.059 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 647499 ']' 00:07:27.059 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # kill -0 647499 00:07:27.059 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # uname 00:07:27.059 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:27.059 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 647499 00:07:27.059 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:27.059 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:27.059 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 647499' 00:07:27.059 killing process with pid 647499 00:07:27.059 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # kill 647499 00:07:27.060 [2024-06-10 10:32:51.209170] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:27.060 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # wait 647499 00:07:27.320 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:27.320 00:07:27.320 real 0m11.361s 00:07:27.320 user 0m44.659s 00:07:27.320 sys 0m1.043s 00:07:27.320 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:27.320 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:27.320 ************************************ 00:07:27.320 END TEST nvmf_filesystem_no_in_capsule 00:07:27.320 ************************************ 00:07:27.321 10:32:51 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:27.321 10:32:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 
-le 1 ']' 00:07:27.321 10:32:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:27.321 10:32:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:27.321 ************************************ 00:07:27.321 START TEST nvmf_filesystem_in_capsule 00:07:27.321 ************************************ 00:07:27.321 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 4096 00:07:27.321 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:27.321 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:27.321 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:27.321 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:27.321 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:27.321 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=650073 00:07:27.321 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 650073 00:07:27.321 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:27.321 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 650073 ']' 00:07:27.321 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.321 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:27.321 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.321 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:27.321 10:32:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:27.321 [2024-06-10 10:32:51.578399] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:07:27.321 [2024-06-10 10:32:51.578446] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:27.581 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.581 [2024-06-10 10:32:51.643844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:27.581 [2024-06-10 10:32:51.709360] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:27.581 [2024-06-10 10:32:51.709396] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
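The in-capsule pass starting here tears down the previous subsystem and re-provisions the target exactly as the no-in-capsule pass did; the only functional difference is nvmf_create_transport's -c 4096 (4 KiB in-capsule data) instead of -c 0. Condensed, and with the suite's rpc_cmd wrapper written out as a direct scripts/rpc.py invocation (an assumption for the standalone form; in the test the target itself runs inside the cvl_0_0_ns_spdk namespace and nvme connect also passes --hostnqn/--hostid derived from the host UUID), the sequence is roughly:

    #!/usr/bin/env bash
    # Sketch of the teardown/re-provision cycle between the two passes traced above.
    # Assumes the target was started as in this log, e.g.:
    #   ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    set -e
    RPC="./scripts/rpc.py"                 # stand-in for the suite's rpc_cmd wrapper
    NQN=nqn.2016-06.io.spdk:cnode1

    # Teardown of the previous pass (host side, then target side).
    nvme disconnect -n "$NQN"
    $RPC nvmf_delete_subsystem "$NQN"

    # Provisioning for the next pass; -c 4096 enables in-capsule data (-c 0 above disabled it).
    $RPC nvmf_create_transport -t tcp -o -u 8192 -c 4096
    $RPC bdev_malloc_create 512 512 -b Malloc1            # 512 MiB malloc bdev, 512-byte blocks
    $RPC nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns "$NQN" Malloc1
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

    # Host side: connect and wait for the namespace to show up.
    nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep -w SPDKISFASTANDAWESOME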
00:07:27.581 [2024-06-10 10:32:51.709403] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:27.581 [2024-06-10 10:32:51.709409] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:27.581 [2024-06-10 10:32:51.709415] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:27.581 [2024-06-10 10:32:51.709550] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.581 [2024-06-10 10:32:51.709662] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.581 [2024-06-10 10:32:51.709815] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.581 [2024-06-10 10:32:51.709816] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:07:28.153 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:28.153 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:07:28.153 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:28.153 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:28.153 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.153 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:28.153 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:28.153 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:28.153 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:28.153 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.153 [2024-06-10 10:32:52.399929] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:28.153 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:28.153 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:28.153 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:28.153 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.414 Malloc1 00:07:28.414 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:28.414 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:28.414 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:28.414 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.414 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:28.414 10:32:52 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:28.414 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:28.414 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.414 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:28.414 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:28.414 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:28.414 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.414 [2024-06-10 10:32:52.533367] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:28.414 [2024-06-10 10:32:52.533622] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:28.414 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:28.414 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:28.414 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:07:28.414 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:07:28.414 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:07:28.414 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:07:28.414 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:28.414 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:28.414 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.414 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:28.414 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:07:28.414 { 00:07:28.414 "name": "Malloc1", 00:07:28.414 "aliases": [ 00:07:28.414 "54230ac5-da4d-43d8-8807-0e745fc696e1" 00:07:28.414 ], 00:07:28.414 "product_name": "Malloc disk", 00:07:28.414 "block_size": 512, 00:07:28.414 "num_blocks": 1048576, 00:07:28.414 "uuid": "54230ac5-da4d-43d8-8807-0e745fc696e1", 00:07:28.414 "assigned_rate_limits": { 00:07:28.414 "rw_ios_per_sec": 0, 00:07:28.414 "rw_mbytes_per_sec": 0, 00:07:28.414 "r_mbytes_per_sec": 0, 00:07:28.414 "w_mbytes_per_sec": 0 00:07:28.414 }, 00:07:28.414 "claimed": true, 00:07:28.414 "claim_type": "exclusive_write", 00:07:28.414 "zoned": false, 00:07:28.414 "supported_io_types": { 00:07:28.414 "read": true, 00:07:28.414 "write": true, 00:07:28.414 "unmap": true, 00:07:28.414 "write_zeroes": true, 00:07:28.414 "flush": true, 00:07:28.414 "reset": true, 
00:07:28.414 "compare": false, 00:07:28.414 "compare_and_write": false, 00:07:28.414 "abort": true, 00:07:28.414 "nvme_admin": false, 00:07:28.414 "nvme_io": false 00:07:28.414 }, 00:07:28.414 "memory_domains": [ 00:07:28.414 { 00:07:28.414 "dma_device_id": "system", 00:07:28.414 "dma_device_type": 1 00:07:28.414 }, 00:07:28.414 { 00:07:28.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.414 "dma_device_type": 2 00:07:28.414 } 00:07:28.414 ], 00:07:28.414 "driver_specific": {} 00:07:28.414 } 00:07:28.414 ]' 00:07:28.414 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:07:28.414 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:07:28.414 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:07:28.414 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:07:28.414 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:07:28.414 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:07:28.414 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:28.414 10:32:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:30.326 10:32:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:30.326 10:32:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:07:30.326 10:32:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:07:30.326 10:32:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:07:30.326 10:32:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:07:32.240 10:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:07:32.240 10:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:32.240 10:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:07:32.240 10:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:07:32.240 10:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:07:32.240 10:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:07:32.240 10:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:32.240 10:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:32.240 10:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:32.240 10:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:32.240 10:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:32.240 10:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:32.240 10:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:32.240 10:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:32.240 10:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:32.240 10:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:32.240 10:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:32.240 10:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:32.501 10:32:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:33.885 10:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:33.885 10:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:33.885 10:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:33.885 10:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:33.885 10:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:33.885 ************************************ 00:07:33.885 START TEST filesystem_in_capsule_ext4 00:07:33.885 ************************************ 00:07:33.885 10:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:33.885 10:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:33.885 10:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:33.885 10:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:33.885 10:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:07:33.885 10:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:33.886 10:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:07:33.886 10:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local force 00:07:33.886 10:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:07:33.886 10:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # force=-F 00:07:33.886 10:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:33.886 mke2fs 1.46.5 (30-Dec-2021) 00:07:33.886 Discarding device blocks: 0/522240 done 00:07:33.886 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:33.886 Filesystem UUID: e98091ff-3ec8-4303-be76-009d6332eb54 00:07:33.886 Superblock backups stored on blocks: 00:07:33.886 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:33.886 00:07:33.886 Allocating group tables: 0/64 done 00:07:33.886 Writing inode tables: 0/64 done 00:07:34.146 Creating journal (8192 blocks): done 00:07:35.088 Writing superblocks and filesystem accounting information: 0/6428/64 done 00:07:35.088 00:07:35.088 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@944 -- # return 0 00:07:35.088 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:35.088 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:35.349 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:35.349 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:35.349 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:35.349 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:35.349 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:35.349 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 650073 00:07:35.349 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:35.349 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:35.349 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:35.349 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:35.349 00:07:35.349 real 0m1.619s 00:07:35.349 user 0m0.027s 00:07:35.349 sys 0m0.047s 00:07:35.349 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:35.349 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:35.349 ************************************ 00:07:35.349 END TEST filesystem_in_capsule_ext4 00:07:35.349 ************************************ 00:07:35.349 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:35.349 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:35.349 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:35.349 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:35.349 ************************************ 00:07:35.349 START TEST filesystem_in_capsule_btrfs 00:07:35.349 ************************************ 00:07:35.349 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:35.349 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:35.349 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:35.349 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:35.349 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:07:35.349 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:35.349 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:07:35.349 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local force 00:07:35.349 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:07:35.349 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:07:35.349 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:35.609 btrfs-progs v6.6.2 00:07:35.609 See https://btrfs.readthedocs.io for more information. 00:07:35.609 00:07:35.609 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:35.609 NOTE: several default settings have changed in version 5.15, please make sure 00:07:35.609 this does not affect your deployments: 00:07:35.609 - DUP for metadata (-m dup) 00:07:35.609 - enabled no-holes (-O no-holes) 00:07:35.610 - enabled free-space-tree (-R free-space-tree) 00:07:35.610 00:07:35.610 Label: (null) 00:07:35.610 UUID: d7c9936f-2b0e-4cfa-bbec-99b2d79e3acb 00:07:35.610 Node size: 16384 00:07:35.610 Sector size: 4096 00:07:35.610 Filesystem size: 510.00MiB 00:07:35.610 Block group profiles: 00:07:35.610 Data: single 8.00MiB 00:07:35.610 Metadata: DUP 32.00MiB 00:07:35.610 System: DUP 8.00MiB 00:07:35.610 SSD detected: yes 00:07:35.610 Zoned device: no 00:07:35.610 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:35.610 Runtime features: free-space-tree 00:07:35.610 Checksum: crc32c 00:07:35.610 Number of devices: 1 00:07:35.610 Devices: 00:07:35.610 ID SIZE PATH 00:07:35.610 1 510.00MiB /dev/nvme0n1p1 00:07:35.610 00:07:35.610 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@944 -- # return 0 00:07:35.610 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:35.610 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:35.610 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:35.610 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:35.610 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:35.610 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:35.610 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:35.610 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 650073 00:07:35.610 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:35.610 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:35.610 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:35.610 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:35.610 00:07:35.610 real 0m0.295s 00:07:35.610 user 0m0.026s 00:07:35.610 sys 0m0.058s 00:07:35.610 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:35.610 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:35.610 ************************************ 00:07:35.610 END TEST filesystem_in_capsule_btrfs 00:07:35.610 ************************************ 00:07:35.610 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:35.610 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:35.610 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:35.610 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:35.610 ************************************ 00:07:35.610 START TEST filesystem_in_capsule_xfs 00:07:35.610 ************************************ 00:07:35.610 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:07:35.610 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:35.610 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:35.610 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:35.610 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:07:35.610 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:35.610 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local i=0 00:07:35.610 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local force 00:07:35.610 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:07:35.610 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # force=-f 00:07:35.610 10:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:35.871 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:35.871 = sectsz=512 attr=2, projid32bit=1 00:07:35.871 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:35.871 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:35.871 data = bsize=4096 blocks=130560, imaxpct=25 00:07:35.871 = sunit=0 swidth=0 blks 00:07:35.871 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:35.871 log =internal log bsize=4096 blocks=16384, version=2 00:07:35.871 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:35.871 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:36.442 Discarding blocks...Done. 
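For readers following the xtrace, the three mkfs invocations above (ext4, btrfs, xfs) all go through the same make_filesystem helper in common/autotest_common.sh: it picks -F for ext4 and -f for every other filesystem, then runs the matching mkfs binary against the partition. A minimal sketch of that logic, reconstructed only from what the trace shows (the retry bookkeeping hinted at by "local i=0" is an assumption, not visible in the log):

    # Sketch of the helper as it appears in the xtrace; anything beyond the
    # visible trace (notably the retry loop implied by "local i=0") is a guess.
    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local i=0
        local force

        # ext4 takes -F to force formatting; btrfs and xfs take -f
        if [ "$fstype" = ext4 ]; then
            force=-F
        else
            force=-f
        fi

        # assumed retry: give the freshly probed partition a moment if mkfs fails
        until mkfs."$fstype" $force "$dev_name"; do
            (( ++i > 3 )) && return 1
            sleep 1
        done
        return 0
    }

The kill -0 / lsblk checks that follow each mount-touch-sync-umount round trip are the test's way of confirming the target process (pid 650073 in this run) is still alive and both nvme0n1 and nvme0n1p1 are still exposed after the filesystem exercise.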
00:07:36.442 10:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@944 -- # return 0 00:07:36.442 10:33:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:38.354 10:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:38.354 10:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:38.354 10:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:38.354 10:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:38.354 10:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:38.354 10:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:38.354 10:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 650073 00:07:38.614 10:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:38.614 10:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:38.614 10:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:38.614 10:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:38.614 00:07:38.614 real 0m2.773s 00:07:38.614 user 0m0.025s 00:07:38.614 sys 0m0.053s 00:07:38.614 10:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:38.614 10:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:38.614 ************************************ 00:07:38.614 END TEST filesystem_in_capsule_xfs 00:07:38.614 ************************************ 00:07:38.614 10:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:38.614 10:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:38.614 10:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:38.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:38.875 10:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:38.875 10:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:07:38.875 10:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:07:38.875 10:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:38.875 10:33:02 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:07:38.875 10:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:38.875 10:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:07:38.875 10:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:38.875 10:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:38.875 10:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:38.875 10:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:38.875 10:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:38.875 10:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 650073 00:07:38.875 10:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 650073 ']' 00:07:38.875 10:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # kill -0 650073 00:07:38.875 10:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # uname 00:07:38.875 10:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:38.875 10:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 650073 00:07:38.875 10:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:38.875 10:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:38.875 10:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 650073' 00:07:38.875 killing process with pid 650073 00:07:38.875 10:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # kill 650073 00:07:38.875 [2024-06-10 10:33:03.004459] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:38.875 10:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # wait 650073 00:07:39.137 10:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:39.137 00:07:39.137 real 0m11.719s 00:07:39.137 user 0m46.148s 00:07:39.137 sys 0m1.015s 00:07:39.137 10:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:39.137 10:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:39.137 ************************************ 00:07:39.137 END TEST nvmf_filesystem_in_capsule 00:07:39.137 ************************************ 00:07:39.137 10:33:03 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:39.137 10:33:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:07:39.137 10:33:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:39.137 10:33:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:39.137 10:33:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:39.137 10:33:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:39.137 10:33:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:39.137 rmmod nvme_tcp 00:07:39.137 rmmod nvme_fabrics 00:07:39.137 rmmod nvme_keyring 00:07:39.137 10:33:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:39.137 10:33:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:39.137 10:33:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:39.137 10:33:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:39.137 10:33:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:39.137 10:33:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:39.137 10:33:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:39.137 10:33:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:39.137 10:33:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:39.137 10:33:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.137 10:33:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:39.137 10:33:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.684 10:33:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:41.684 00:07:41.684 real 0m33.107s 00:07:41.684 user 1m33.037s 00:07:41.684 sys 0m7.736s 00:07:41.684 10:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:41.684 10:33:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.684 ************************************ 00:07:41.684 END TEST nvmf_filesystem 00:07:41.684 ************************************ 00:07:41.684 10:33:05 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:41.684 10:33:05 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:41.684 10:33:05 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:41.684 10:33:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:41.684 ************************************ 00:07:41.684 START TEST nvmf_target_discovery 00:07:41.684 ************************************ 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:41.685 * Looking for test storage... 
00:07:41.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:41.685 10:33:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:49.831 10:33:12 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:49.831 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:49.831 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:49.831 Found net devices under 0000:31:00.0: cvl_0_0 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:49.831 Found net devices under 0000:31:00.1: cvl_0_1 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:49.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:49.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:07:49.831 00:07:49.831 --- 10.0.0.2 ping statistics --- 00:07:49.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.831 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:07:49.831 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:49.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:49.832 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:07:49.832 00:07:49.832 --- 10.0.0.1 ping statistics --- 00:07:49.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.832 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:07:49.832 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:49.832 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:49.832 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:49.832 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:49.832 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:49.832 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:49.832 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:49.832 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:49.832 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:49.832 10:33:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:49.832 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:49.832 10:33:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:49.832 10:33:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.832 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=657262 00:07:49.832 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 657262 00:07:49.832 10:33:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:49.832 10:33:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@830 -- # '[' -z 657262 ']' 00:07:49.832 10:33:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.832 10:33:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:49.832 10:33:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:07:49.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.832 10:33:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:49.832 10:33:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.832 [2024-06-10 10:33:12.975900] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:07:49.832 [2024-06-10 10:33:12.975947] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:49.832 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.832 [2024-06-10 10:33:13.041898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:49.832 [2024-06-10 10:33:13.107189] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:49.832 [2024-06-10 10:33:13.107227] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:49.832 [2024-06-10 10:33:13.107234] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:49.832 [2024-06-10 10:33:13.107240] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:49.832 [2024-06-10 10:33:13.107251] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:49.832 [2024-06-10 10:33:13.107345] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.832 [2024-06-10 10:33:13.107458] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.832 [2024-06-10 10:33:13.107614] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.832 [2024-06-10 10:33:13.107614] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@863 -- # return 0 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.832 [2024-06-10 10:33:13.797843] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:49.832 10:33:13 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.832 Null1 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.832 [2024-06-10 10:33:13.857950] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:49.832 [2024-06-10 10:33:13.858188] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.832 Null2 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 
]] 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.832 Null3 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.832 Null4 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:49.832 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:49.833 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.833 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:49.833 10:33:13 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:49.833 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:49.833 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.833 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:49.833 10:33:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:49.833 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:49.833 10:33:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.833 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:49.833 10:33:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:49.833 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:49.833 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.833 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:49.833 10:33:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:49.833 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:49.833 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:49.833 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:49.833 10:33:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:07:50.094 00:07:50.094 Discovery Log Number of Records 6, Generation counter 6 00:07:50.094 =====Discovery Log Entry 0====== 00:07:50.094 trtype: tcp 00:07:50.094 adrfam: ipv4 00:07:50.094 subtype: current discovery subsystem 00:07:50.094 treq: not required 00:07:50.094 portid: 0 00:07:50.094 trsvcid: 4420 00:07:50.094 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:50.094 traddr: 10.0.0.2 00:07:50.094 eflags: explicit discovery connections, duplicate discovery information 00:07:50.094 sectype: none 00:07:50.094 =====Discovery Log Entry 1====== 00:07:50.094 trtype: tcp 00:07:50.094 adrfam: ipv4 00:07:50.094 subtype: nvme subsystem 00:07:50.094 treq: not required 00:07:50.094 portid: 0 00:07:50.094 trsvcid: 4420 00:07:50.094 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:50.094 traddr: 10.0.0.2 00:07:50.094 eflags: none 00:07:50.094 sectype: none 00:07:50.094 =====Discovery Log Entry 2====== 00:07:50.094 trtype: tcp 00:07:50.094 adrfam: ipv4 00:07:50.094 subtype: nvme subsystem 00:07:50.094 treq: not required 00:07:50.094 portid: 0 00:07:50.094 trsvcid: 4420 00:07:50.094 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:50.094 traddr: 10.0.0.2 00:07:50.094 eflags: none 00:07:50.094 sectype: none 00:07:50.094 =====Discovery Log Entry 3====== 00:07:50.094 trtype: tcp 00:07:50.094 adrfam: ipv4 00:07:50.094 subtype: nvme subsystem 00:07:50.094 treq: not required 00:07:50.094 portid: 0 00:07:50.094 trsvcid: 4420 00:07:50.094 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:50.094 traddr: 10.0.0.2 
00:07:50.094 eflags: none 00:07:50.094 sectype: none 00:07:50.094 =====Discovery Log Entry 4====== 00:07:50.094 trtype: tcp 00:07:50.094 adrfam: ipv4 00:07:50.094 subtype: nvme subsystem 00:07:50.094 treq: not required 00:07:50.094 portid: 0 00:07:50.094 trsvcid: 4420 00:07:50.094 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:50.094 traddr: 10.0.0.2 00:07:50.094 eflags: none 00:07:50.094 sectype: none 00:07:50.094 =====Discovery Log Entry 5====== 00:07:50.094 trtype: tcp 00:07:50.094 adrfam: ipv4 00:07:50.094 subtype: discovery subsystem referral 00:07:50.094 treq: not required 00:07:50.094 portid: 0 00:07:50.094 trsvcid: 4430 00:07:50.094 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:50.094 traddr: 10.0.0.2 00:07:50.094 eflags: none 00:07:50.094 sectype: none 00:07:50.094 10:33:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:50.094 Perform nvmf subsystem discovery via RPC 00:07:50.094 10:33:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:50.094 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:50.094 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:50.094 [ 00:07:50.094 { 00:07:50.094 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:50.094 "subtype": "Discovery", 00:07:50.094 "listen_addresses": [ 00:07:50.094 { 00:07:50.094 "trtype": "TCP", 00:07:50.094 "adrfam": "IPv4", 00:07:50.094 "traddr": "10.0.0.2", 00:07:50.094 "trsvcid": "4420" 00:07:50.094 } 00:07:50.094 ], 00:07:50.094 "allow_any_host": true, 00:07:50.094 "hosts": [] 00:07:50.094 }, 00:07:50.094 { 00:07:50.094 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:50.094 "subtype": "NVMe", 00:07:50.094 "listen_addresses": [ 00:07:50.094 { 00:07:50.094 "trtype": "TCP", 00:07:50.094 "adrfam": "IPv4", 00:07:50.094 "traddr": "10.0.0.2", 00:07:50.094 "trsvcid": "4420" 00:07:50.094 } 00:07:50.094 ], 00:07:50.094 "allow_any_host": true, 00:07:50.094 "hosts": [], 00:07:50.094 "serial_number": "SPDK00000000000001", 00:07:50.094 "model_number": "SPDK bdev Controller", 00:07:50.094 "max_namespaces": 32, 00:07:50.094 "min_cntlid": 1, 00:07:50.094 "max_cntlid": 65519, 00:07:50.094 "namespaces": [ 00:07:50.094 { 00:07:50.094 "nsid": 1, 00:07:50.094 "bdev_name": "Null1", 00:07:50.094 "name": "Null1", 00:07:50.094 "nguid": "D16133908F4942A59948C74BA23CD7A7", 00:07:50.094 "uuid": "d1613390-8f49-42a5-9948-c74ba23cd7a7" 00:07:50.094 } 00:07:50.094 ] 00:07:50.094 }, 00:07:50.094 { 00:07:50.094 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:50.094 "subtype": "NVMe", 00:07:50.094 "listen_addresses": [ 00:07:50.094 { 00:07:50.094 "trtype": "TCP", 00:07:50.094 "adrfam": "IPv4", 00:07:50.094 "traddr": "10.0.0.2", 00:07:50.094 "trsvcid": "4420" 00:07:50.094 } 00:07:50.094 ], 00:07:50.094 "allow_any_host": true, 00:07:50.094 "hosts": [], 00:07:50.094 "serial_number": "SPDK00000000000002", 00:07:50.094 "model_number": "SPDK bdev Controller", 00:07:50.094 "max_namespaces": 32, 00:07:50.094 "min_cntlid": 1, 00:07:50.094 "max_cntlid": 65519, 00:07:50.094 "namespaces": [ 00:07:50.094 { 00:07:50.094 "nsid": 1, 00:07:50.094 "bdev_name": "Null2", 00:07:50.094 "name": "Null2", 00:07:50.094 "nguid": "9BE41843B047404ABC2DB4DA5A620915", 00:07:50.094 "uuid": "9be41843-b047-404a-bc2d-b4da5a620915" 00:07:50.094 } 00:07:50.094 ] 00:07:50.095 }, 00:07:50.095 { 00:07:50.095 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:50.095 "subtype": "NVMe", 00:07:50.095 "listen_addresses": [ 
00:07:50.095 { 00:07:50.095 "trtype": "TCP", 00:07:50.095 "adrfam": "IPv4", 00:07:50.095 "traddr": "10.0.0.2", 00:07:50.095 "trsvcid": "4420" 00:07:50.095 } 00:07:50.095 ], 00:07:50.095 "allow_any_host": true, 00:07:50.095 "hosts": [], 00:07:50.095 "serial_number": "SPDK00000000000003", 00:07:50.095 "model_number": "SPDK bdev Controller", 00:07:50.095 "max_namespaces": 32, 00:07:50.095 "min_cntlid": 1, 00:07:50.095 "max_cntlid": 65519, 00:07:50.095 "namespaces": [ 00:07:50.095 { 00:07:50.095 "nsid": 1, 00:07:50.095 "bdev_name": "Null3", 00:07:50.095 "name": "Null3", 00:07:50.095 "nguid": "65EAE2BB446B4655B9B33DE772C80B81", 00:07:50.095 "uuid": "65eae2bb-446b-4655-b9b3-3de772c80b81" 00:07:50.095 } 00:07:50.095 ] 00:07:50.095 }, 00:07:50.095 { 00:07:50.095 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:50.095 "subtype": "NVMe", 00:07:50.095 "listen_addresses": [ 00:07:50.095 { 00:07:50.095 "trtype": "TCP", 00:07:50.095 "adrfam": "IPv4", 00:07:50.095 "traddr": "10.0.0.2", 00:07:50.095 "trsvcid": "4420" 00:07:50.095 } 00:07:50.095 ], 00:07:50.095 "allow_any_host": true, 00:07:50.095 "hosts": [], 00:07:50.095 "serial_number": "SPDK00000000000004", 00:07:50.095 "model_number": "SPDK bdev Controller", 00:07:50.095 "max_namespaces": 32, 00:07:50.095 "min_cntlid": 1, 00:07:50.095 "max_cntlid": 65519, 00:07:50.095 "namespaces": [ 00:07:50.095 { 00:07:50.095 "nsid": 1, 00:07:50.095 "bdev_name": "Null4", 00:07:50.095 "name": "Null4", 00:07:50.095 "nguid": "BB90CB7EB9C34C6FBA873B7C93923CFD", 00:07:50.095 "uuid": "bb90cb7e-b9c3-4c6f-ba87-3b7c93923cfd" 00:07:50.095 } 00:07:50.095 ] 00:07:50.095 } 00:07:50.095 ] 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:50.095 
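The teardown traced above reduces to a handful of plain RPC calls; a minimal sketch, assuming a target launched from the SPDK tree (so scripts/rpc.py reaches the default /var/tmp/spdk.sock) and the cnode1..4 / Null1..4 names that discovery.sh created earlier:

    # delete each test subsystem, then the null bdev that backed its namespace
    for i in 1 2 3 4; do
        scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
        scripts/rpc.py bdev_null_delete "Null$i"
    done
    # drop the extra discovery referral advertised at 10.0.0.2:4430
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
    # verify nothing is left behind (the trace above expects an empty list)
    scripts/rpc.py bdev_get_bdevs | jq -r '.[].name'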
10:33:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:50.095 10:33:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:50.095 rmmod nvme_tcp 00:07:50.095 rmmod nvme_fabrics 00:07:50.356 rmmod nvme_keyring 00:07:50.356 10:33:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:50.356 10:33:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:50.356 10:33:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:50.356 10:33:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 657262 ']' 00:07:50.356 10:33:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 657262 00:07:50.356 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@949 -- # '[' -z 657262 ']' 00:07:50.356 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # kill -0 657262 00:07:50.356 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # uname 00:07:50.356 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:50.356 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 657262 00:07:50.356 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:50.356 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:50.356 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 657262' 00:07:50.356 killing process with pid 657262 00:07:50.356 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@968 -- # kill 657262 00:07:50.356 [2024-06-10 10:33:14.472495] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:50.356 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@973 -- # wait 657262 00:07:50.356 10:33:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:50.356 10:33:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:50.356 10:33:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:50.356 10:33:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:50.356 10:33:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:50.356 10:33:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.356 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:50.356 10:33:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.904 10:33:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:52.904 00:07:52.904 real 0m11.159s 00:07:52.904 user 0m8.208s 
00:07:52.904 sys 0m5.698s 00:07:52.904 10:33:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:52.904 10:33:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.904 ************************************ 00:07:52.904 END TEST nvmf_target_discovery 00:07:52.904 ************************************ 00:07:52.904 10:33:16 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:52.904 10:33:16 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:52.904 10:33:16 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:52.904 10:33:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:52.904 ************************************ 00:07:52.904 START TEST nvmf_referrals 00:07:52.904 ************************************ 00:07:52.904 10:33:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:52.904 * Looking for test storage... 00:07:52.904 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:52.904 10:33:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:52.904 10:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:52.904 10:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:52.904 10:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:52.904 10:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:52.904 10:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:52.904 10:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:52.904 10:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:52.904 10:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:52.904 10:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:52.904 10:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:52.904 10:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:52.904 10:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:52.904 10:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:52.904 10:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:52.904 10:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:52.904 10:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:52.904 10:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:52.905 10:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:52.905 10:33:16 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:52.905 10:33:16 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:52.905 10:33:16 
nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:52.905 10:33:16 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.905 10:33:16 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.905 10:33:16 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.905 10:33:16 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:52.905 10:33:16 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.905 10:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:52.905 10:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:52.905 10:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:52.905 10:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:52.905 10:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:52.905 10:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:52.905 10:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:52.905 10:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:52.905 10:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:52.905 10:33:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:52.905 10:33:16 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:52.905 10:33:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:52.905 10:33:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:52.905 10:33:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:52.905 10:33:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:52.905 10:33:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:52.905 10:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:52.905 10:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:52.905 10:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:52.905 10:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:52.905 10:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:52.905 10:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.905 10:33:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:52.905 10:33:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.905 10:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:52.905 10:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:52.905 10:33:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:52.905 10:33:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:01.052 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:01.052 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:01.052 Found net devices under 0000:31:00.0: cvl_0_0 00:08:01.052 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.053 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:01.053 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.053 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:01.053 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:01.053 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:01.053 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:01.053 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.053 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:01.053 Found net devices under 0000:31:00.1: cvl_0_1 00:08:01.053 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.053 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:01.053 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:01.053 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:01.053 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:01.053 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:01.053 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:01.053 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:01.053 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:01.053 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:01.053 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:01.053 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:01.053 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:01.053 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:01.053 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:01.053 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:01.053 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:01.053 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:01.053 10:33:23 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:01.053 10:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
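nvmf_tcp_init, which starts in the trace above and finishes just below, builds the two-port TCP topology every nvmf_tcp test reuses; a condensed sketch of the same sequence, with the cvl_0_0/cvl_0_1 names taken from the e810 ports detected above:

    # the target side runs in its own network namespace, the initiator stays in the root one
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator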
00:08:01.053 10:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:01.053 10:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:01.053 10:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:01.053 10:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:01.053 10:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:01.053 10:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:01.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:01.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.686 ms 00:08:01.053 00:08:01.053 --- 10.0.0.2 ping statistics --- 00:08:01.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.053 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:08:01.053 10:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:01.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:01.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:08:01.053 00:08:01.053 --- 10.0.0.1 ping statistics --- 00:08:01.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.053 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:08:01.053 10:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:01.053 10:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:01.053 10:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:01.053 10:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:01.053 10:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:01.053 10:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:01.053 10:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:01.053 10:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:01.053 10:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:01.053 10:33:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:01.053 10:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:01.053 10:33:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:01.053 10:33:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:01.053 10:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=661988 00:08:01.053 10:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 661988 00:08:01.053 10:33:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:01.053 10:33:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@830 -- # '[' -z 661988 ']' 00:08:01.053 10:33:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.053 10:33:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:01.053 10:33:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@837 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.053 10:33:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:01.053 10:33:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:01.053 [2024-06-10 10:33:24.260305] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:08:01.053 [2024-06-10 10:33:24.260366] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.053 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.053 [2024-06-10 10:33:24.334080] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:01.053 [2024-06-10 10:33:24.410104] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:01.053 [2024-06-10 10:33:24.410143] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:01.053 [2024-06-10 10:33:24.410154] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:01.053 [2024-06-10 10:33:24.410161] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:01.053 [2024-06-10 10:33:24.410166] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:01.053 [2024-06-10 10:33:24.410310] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.053 [2024-06-10 10:33:24.410428] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.053 [2024-06-10 10:33:24.410585] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.053 [2024-06-10 10:33:24.410586] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:08:01.053 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:01.053 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@863 -- # return 0 00:08:01.053 10:33:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:01.053 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:01.053 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:01.053 10:33:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:01.053 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:01.053 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:01.053 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:01.053 [2024-06-10 10:33:25.087818] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:01.053 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:01.053 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:01.053 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:01.053 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:01.053 [2024-06-10 10:33:25.103820] nvmf_rpc.c: 
615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:01.053 [2024-06-10 10:33:25.104019] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:01.054 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:01.054 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:01.054 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:01.054 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:01.054 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:01.054 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:01.054 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:01.054 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:01.054 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:01.054 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:01.054 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:01.054 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:01.054 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:01.054 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:01.054 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:01.054 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:01.054 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:01.054 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:01.054 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:01.054 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:01.054 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:01.054 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:01.054 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:01.054 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:01.054 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:01.054 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:01.054 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:01.054 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:01.054 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:01.054 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:01.054 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:01.054 10:33:25 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:01.054 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:01.054 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:01.054 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:01.315 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:01.576 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:01.576 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:01.576 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:01.576 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:01.576 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:01.576 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:01.576 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:01.576 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:01.576 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:01.576 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:01.576 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:01.576 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:01.576 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:01.576 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:01.576 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:01.576 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:01.576 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:01.576 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:01.576 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:01.576 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:01.576 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:01.576 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:01.837 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:01.837 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:01.837 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:01.837 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:01.837 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:01.837 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:01.837 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:01.837 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:01.837 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:01.837 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:01.837 10:33:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:01.837 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:01.837 10:33:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:01.837 10:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:01.837 10:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:01.837 10:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:01.837 10:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:01.837 10:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:01.837 10:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:01.837 10:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 
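The second half of referrals.sh, traced around here, re-adds referrals with an explicit subsystem NQN and checks that the discovery log reports the matching subtype; a condensed sketch of that round-trip, assuming -n names the referred-to subsystem (the trace is consistent with this but does not spell it out) and reusing the jq field names from the nvme-cli JSON above:

    # one referral advertised as another discovery service, one as a concrete NVMe subsystem
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    # list them from the target's point of view...
    scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
    # ...and from the initiator's, split by subtype the way referrals.sh does
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype == "nvme subsystem").subnqn'
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype == "discovery subsystem referral").subnqn'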
00:08:01.837 10:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:02.098 10:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:02.098 10:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:02.098 10:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:02.098 10:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:02.098 10:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:02.098 10:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:02.098 10:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:02.098 10:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:02.098 10:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:02.098 10:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:02.098 10:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:02.098 10:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:02.098 10:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:02.359 10:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:02.359 10:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:02.359 10:33:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:02.359 10:33:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:02.359 10:33:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:02.359 10:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:02.359 10:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:02.359 10:33:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:02.359 10:33:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:02.359 10:33:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:02.359 10:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:02.359 10:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:02.359 10:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:02.359 10:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:02.359 10:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:02.359 10:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:02.359 10:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:02.359 10:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:02.359 10:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:02.359 10:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:02.359 10:33:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:02.359 10:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:02.359 10:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:02.620 10:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:02.620 10:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:02.620 10:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:02.620 10:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:02.620 rmmod nvme_tcp 00:08:02.620 rmmod nvme_fabrics 00:08:02.620 rmmod nvme_keyring 00:08:02.620 10:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:02.620 10:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:02.620 10:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:02.620 10:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 661988 ']' 00:08:02.620 10:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 661988 00:08:02.620 10:33:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@949 -- # '[' -z 661988 ']' 00:08:02.620 10:33:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # kill -0 661988 00:08:02.620 10:33:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # uname 00:08:02.620 10:33:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:02.620 10:33:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 661988 00:08:02.620 10:33:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:02.620 10:33:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:02.620 10:33:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # echo 'killing process with pid 661988' 00:08:02.620 killing process with pid 661988 00:08:02.620 10:33:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@968 -- # kill 661988 00:08:02.620 [2024-06-10 10:33:26.780949] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:02.620 10:33:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@973 -- # wait 661988 00:08:02.881 10:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:02.881 10:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:02.881 10:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:02.881 10:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:02.881 10:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 
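nvmftestfini, whose trace begins above and continues below, is the mirror image of the setup; a rough sketch of what it amounts to for these phy/tcp runs (the namespace removal is an assumption about what _remove_spdk_ns does, the trace only shows the wrapper being invoked):

    # unload the host-side NVMe/TCP stack pulled in by the initiator commands
    modprobe -r nvme-tcp
    modprobe -r nvme-fabrics
    # stop the nvmf_tgt started for the test (pid 661988 in this run)
    nvmfpid=$(pgrep -f build/bin/nvmf_tgt)
    kill "$nvmfpid"
    # tear the test topology back down
    ip netns delete cvl_0_0_ns_spdk        # assumption: what _remove_spdk_ns boils down to here
    ip -4 addr flush cvl_0_1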
00:08:02.881 10:33:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.881 10:33:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:02.881 10:33:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:04.794 10:33:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:04.794 00:08:04.794 real 0m12.225s 00:08:04.794 user 0m12.900s 00:08:04.794 sys 0m5.989s 00:08:04.794 10:33:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:04.794 10:33:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:04.794 ************************************ 00:08:04.794 END TEST nvmf_referrals 00:08:04.794 ************************************ 00:08:04.794 10:33:29 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:04.794 10:33:29 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:04.794 10:33:29 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:04.794 10:33:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:04.794 ************************************ 00:08:04.794 START TEST nvmf_connect_disconnect 00:08:04.794 ************************************ 00:08:04.794 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:05.054 * Looking for test storage... 00:08:05.054 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:05.054 10:33:29 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
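For reference, the NVME_* values sourced just above are what the per-test helpers expand into nvme-cli flags. A minimal sketch of how they compose into a host-side command, assuming nvme-cli is installed and a target is listening on 10.0.0.2:4420 as this test configures later (values copied from this log; the real wrapper lives in test/nvmf/common.sh):

    # host identity produced by "nvme gen-hostnqn" above
    NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
    NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396

    # $NVME_CONNECT "${NVME_HOST[@]}" ... expands to roughly:
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"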
00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:05.054 10:33:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:13.196 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:13.196 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:13.196 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:13.196 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:13.196 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:13.196 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:13.196 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:13.196 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:13.196 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:13.196 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:13.196 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:13.196 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:13.196 
10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:13.196 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:13.196 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:13.196 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:13.196 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:13.196 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:13.196 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:13.196 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:13.196 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:13.196 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:13.196 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:13.196 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:13.196 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:13.196 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:13.196 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:13.196 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:13.196 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:13.196 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:13.196 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:13.196 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:13.196 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:13.197 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:13.197 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:13.197 Found net devices under 0000:31:00.0: cvl_0_0 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:13.197 Found net devices under 0000:31:00.1: cvl_0_1 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:13.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:13.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:08:13.197 00:08:13.197 --- 10.0.0.2 ping statistics --- 00:08:13.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.197 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:13.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:13.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.409 ms 00:08:13.197 00:08:13.197 --- 10.0.0.1 ping statistics --- 00:08:13.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.197 rtt min/avg/max/mdev = 0.409/0.409/0.409/0.000 ms 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=666837 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 666837 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@830 -- # '[' -z 666837 ']' 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:13.197 10:33:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:13.198 [2024-06-10 10:33:36.544703] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
00:08:13.198 [2024-06-10 10:33:36.544761] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.198 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.198 [2024-06-10 10:33:36.616956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:13.198 [2024-06-10 10:33:36.692469] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:13.198 [2024-06-10 10:33:36.692506] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:13.198 [2024-06-10 10:33:36.692514] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:13.198 [2024-06-10 10:33:36.692520] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:13.198 [2024-06-10 10:33:36.692526] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:13.198 [2024-06-10 10:33:36.692662] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.198 [2024-06-10 10:33:36.692784] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.198 [2024-06-10 10:33:36.692940] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.198 [2024-06-10 10:33:36.692941] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:08:13.198 10:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:13.198 10:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@863 -- # return 0 00:08:13.198 10:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:13.198 10:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:13.198 10:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:13.198 10:33:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:13.198 10:33:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:13.198 10:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:13.198 10:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:13.198 [2024-06-10 10:33:37.366789] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:13.198 10:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:13.198 10:33:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:13.198 10:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:13.198 10:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:13.198 10:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:13.198 10:33:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:13.198 10:33:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:13.198 10:33:37 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:13.198 10:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:13.198 10:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:13.198 10:33:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:13.198 10:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:13.198 10:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:13.198 10:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:13.198 10:33:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:13.198 10:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:13.198 10:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:13.198 [2024-06-10 10:33:37.426013] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:13.198 [2024-06-10 10:33:37.426237] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:13.198 10:33:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:13.198 10:33:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:13.198 10:33:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:13.198 10:33:37 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:17.402 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:20.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:24.067 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:28.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.572 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.572 10:33:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:31.572 10:33:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:31.572 10:33:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:31.572 10:33:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:31.572 10:33:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:31.572 10:33:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:31.572 10:33:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:31.572 10:33:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:31.572 rmmod nvme_tcp 00:08:31.572 rmmod nvme_fabrics 00:08:31.572 rmmod nvme_keyring 00:08:31.572 10:33:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:31.572 10:33:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:31.572 10:33:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:31.572 10:33:55 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 666837 ']' 00:08:31.572 10:33:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 666837 00:08:31.572 10:33:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@949 -- # '[' -z 666837 ']' 00:08:31.572 10:33:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # kill -0 666837 00:08:31.572 10:33:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # uname 00:08:31.572 10:33:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:31.572 10:33:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 666837 00:08:31.572 10:33:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:31.572 10:33:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:31.572 10:33:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 666837' 00:08:31.572 killing process with pid 666837 00:08:31.572 10:33:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # kill 666837 00:08:31.572 [2024-06-10 10:33:55.585317] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:31.572 10:33:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # wait 666837 00:08:31.572 10:33:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:31.572 10:33:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:31.572 10:33:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:31.572 10:33:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:31.572 10:33:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:31.572 10:33:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.572 10:33:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:31.572 10:33:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.118 10:33:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:34.118 00:08:34.118 real 0m28.744s 00:08:34.118 user 1m17.917s 00:08:34.118 sys 0m6.455s 00:08:34.118 10:33:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:34.118 10:33:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:34.118 ************************************ 00:08:34.118 END TEST nvmf_connect_disconnect 00:08:34.118 ************************************ 00:08:34.118 10:33:57 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:34.118 10:33:57 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:34.118 10:33:57 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:34.118 10:33:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:34.118 ************************************ 00:08:34.118 START TEST nvmf_multitarget 
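Before the multitarget output continues, the nvmf_connect_disconnect run that just finished can be condensed into the sketch below. It is reconstructed from the records above rather than taken from the script itself; rpc_cmd is assumed to resolve to scripts/rpc.py against the default /var/tmp/spdk.sock, and paths are shortened:

    # target side: nvmf_tgt runs inside the test namespace (see nvmfappstart above)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # RPC configuration, mirroring the rpc_cmd calls logged above
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    ./scripts/rpc.py bdev_malloc_create 64 512                                  # -> Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # host side: num_iterations=5 connect/disconnect rounds
    for i in $(seq 1 5); do
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
            --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # prints "disconnected 1 controller(s)"
    done

    # teardown, as nvmftestfini does above
    modprobe -r nvme-tcp
    modprobe -r nvme-fabrics
    kill "$nvmfpid"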
00:08:34.118 ************************************ 00:08:34.118 10:33:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:34.118 * Looking for test storage... 00:08:34.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:34.118 10:33:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:34.118 10:33:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:34.118 10:33:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:34.118 10:33:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:34.118 10:33:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:34.118 10:33:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:34.118 10:33:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:34.118 10:33:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:34.118 10:33:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:34.118 10:33:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:34.118 10:33:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:34.118 10:33:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:34.118 10:33:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:34.118 10:33:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:34.118 10:33:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:34.118 10:33:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:34.118 10:33:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:34.118 10:33:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:34.118 10:33:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:34.118 10:33:58 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.118 10:33:58 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.118 10:33:58 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.118 10:33:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.118 10:33:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.118 10:33:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.118 10:33:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:34.118 10:33:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.118 10:33:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:34.118 10:33:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:34.118 10:33:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:34.118 10:33:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:34.118 10:33:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:34.118 10:33:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:34.118 10:33:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:34.118 10:33:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:34.118 10:33:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:34.118 10:33:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:34.118 10:33:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:34.118 10:33:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:34.118 10:33:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:34.118 10:33:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:34.118 10:33:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:34.118 10:33:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:34.118 10:33:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:08:34.118 10:33:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:34.118 10:33:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.118 10:33:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:34.118 10:33:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:34.118 10:33:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:34.118 10:33:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:40.712 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:40.712 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:40.712 Found net devices under 0000:31:00.0: cvl_0_0 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:40.712 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.713 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:40.713 Found net devices under 0000:31:00.1: cvl_0_1 00:08:40.713 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.713 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:40.713 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:08:40.713 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:40.713 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:40.713 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:40.713 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:40.713 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:40.713 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:40.713 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:40.713 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:40.713 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:40.713 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:40.713 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:40.713 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:40.713 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:40.713 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:40.713 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:40.713 10:34:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:40.973 10:34:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:40.973 10:34:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:40.973 10:34:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:40.973 10:34:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:40.973 10:34:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:40.973 10:34:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:40.973 10:34:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:40.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:40.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.506 ms 00:08:40.973 00:08:40.973 --- 10.0.0.2 ping statistics --- 00:08:40.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.973 rtt min/avg/max/mdev = 0.506/0.506/0.506/0.000 ms 00:08:40.973 10:34:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:41.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:41.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.353 ms 00:08:41.234 00:08:41.234 --- 10.0.0.1 ping statistics --- 00:08:41.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.234 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:08:41.234 10:34:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:41.234 10:34:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:08:41.234 10:34:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:41.234 10:34:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:41.234 10:34:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:41.234 10:34:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:41.234 10:34:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:41.234 10:34:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:41.234 10:34:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:41.234 10:34:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:41.234 10:34:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:41.234 10:34:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:41.234 10:34:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:41.234 10:34:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=674860 00:08:41.234 10:34:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 674860 00:08:41.234 10:34:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:41.234 10:34:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@830 -- # '[' -z 674860 ']' 00:08:41.234 10:34:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.234 10:34:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:41.234 10:34:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.235 10:34:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:41.235 10:34:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:41.235 [2024-06-10 10:34:05.365931] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
00:08:41.235 [2024-06-10 10:34:05.365998] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.235 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.235 [2024-06-10 10:34:05.438827] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:41.235 [2024-06-10 10:34:05.514404] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:41.235 [2024-06-10 10:34:05.514443] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:41.235 [2024-06-10 10:34:05.514450] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:41.235 [2024-06-10 10:34:05.514456] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:41.235 [2024-06-10 10:34:05.514463] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:41.235 [2024-06-10 10:34:05.514544] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.235 [2024-06-10 10:34:05.514677] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:08:41.235 [2024-06-10 10:34:05.514833] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.235 [2024-06-10 10:34:05.514834] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:08:42.176 10:34:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:42.176 10:34:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@863 -- # return 0 00:08:42.176 10:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:42.176 10:34:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:42.176 10:34:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:42.176 10:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:42.176 10:34:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:42.176 10:34:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:42.176 10:34:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:42.176 10:34:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:42.176 10:34:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:42.176 "nvmf_tgt_1" 00:08:42.176 10:34:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:42.176 "nvmf_tgt_2" 00:08:42.438 10:34:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:42.438 10:34:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:42.438 10:34:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:08:42.438 
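The multitarget checks around this point reduce to a create/verify/delete cycle driven through multitarget_rpc.py; the deletions show up in the records that follow. A sketch of the whole cycle, assuming it is run from the spdk checkout with jq available (target names and sizes copied from the log):

    RPC=./test/nvmf/target/multitarget_rpc.py

    test "$($RPC nvmf_get_targets | jq length)" -eq 1    # only the default target at start
    $RPC nvmf_create_target -n nvmf_tgt_1 -s 32          # add two extra targets
    $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
    test "$($RPC nvmf_get_targets | jq length)" -eq 3
    $RPC nvmf_delete_target -n nvmf_tgt_1                # remove them again
    $RPC nvmf_delete_target -n nvmf_tgt_2
    test "$($RPC nvmf_get_targets | jq length)" -eq 1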
10:34:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:42.438 true 00:08:42.438 10:34:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:42.700 true 00:08:42.700 10:34:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:42.700 10:34:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:42.700 10:34:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:42.700 10:34:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:42.700 10:34:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:42.700 10:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:42.700 10:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:42.700 10:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:42.700 10:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:42.700 10:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:42.700 10:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:42.700 rmmod nvme_tcp 00:08:42.700 rmmod nvme_fabrics 00:08:42.700 rmmod nvme_keyring 00:08:42.700 10:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:42.700 10:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:42.700 10:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:42.700 10:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 674860 ']' 00:08:42.700 10:34:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 674860 00:08:42.700 10:34:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@949 -- # '[' -z 674860 ']' 00:08:42.700 10:34:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # kill -0 674860 00:08:42.700 10:34:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # uname 00:08:42.700 10:34:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:42.700 10:34:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 674860 00:08:42.961 10:34:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:42.961 10:34:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:42.961 10:34:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # echo 'killing process with pid 674860' 00:08:42.961 killing process with pid 674860 00:08:42.961 10:34:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@968 -- # kill 674860 00:08:42.961 10:34:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@973 -- # wait 674860 00:08:42.961 10:34:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:42.961 10:34:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:42.961 10:34:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:42.961 10:34:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:42.961 10:34:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:42.961 10:34:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.961 10:34:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:42.961 10:34:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.507 10:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:45.507 00:08:45.507 real 0m11.323s 00:08:45.507 user 0m9.325s 00:08:45.507 sys 0m5.752s 00:08:45.507 10:34:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:45.507 10:34:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:45.507 ************************************ 00:08:45.507 END TEST nvmf_multitarget 00:08:45.507 ************************************ 00:08:45.507 10:34:09 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:45.507 10:34:09 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:45.507 10:34:09 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:45.507 10:34:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:45.507 ************************************ 00:08:45.507 START TEST nvmf_rpc 00:08:45.507 ************************************ 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:45.507 * Looking for test storage... 00:08:45.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:45.507 10:34:09 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.507 
10:34:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:08:45.507 10:34:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:52.098 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:52.098 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:08:52.098 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:52.098 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:52.098 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:52.098 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:52.098 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:52.098 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:08:52.098 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:52.098 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:08:52.098 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:08:52.098 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:08:52.098 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:08:52.098 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:08:52.098 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:08:52.098 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:52.098 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:52.098 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:52.098 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:52.098 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:52.098 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:52.098 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:52.098 10:34:15 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:52.098 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:52.098 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:52.098 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:52.098 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:52.098 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:52.098 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:52.098 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:52.098 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:52.098 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:52.098 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:52.098 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:52.099 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:52.099 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:52.099 Found net devices under 0000:31:00.0: cvl_0_0 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.099 
10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:52.099 Found net devices under 0000:31:00.1: cvl_0_1 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:52.099 10:34:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:52.099 10:34:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:52.099 10:34:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:52.099 10:34:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:52.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:52.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.758 ms 00:08:52.099 00:08:52.099 --- 10.0.0.2 ping statistics --- 00:08:52.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.099 rtt min/avg/max/mdev = 0.758/0.758/0.758/0.000 ms 00:08:52.099 10:34:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:52.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:52.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:08:52.099 00:08:52.099 --- 10.0.0.1 ping statistics --- 00:08:52.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.099 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:08:52.099 10:34:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:52.099 10:34:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:08:52.099 10:34:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:52.099 10:34:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:52.099 10:34:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:52.099 10:34:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:52.099 10:34:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:52.099 10:34:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:52.099 10:34:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:52.099 10:34:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:52.099 10:34:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:52.099 10:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:52.099 10:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:52.099 10:34:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=679397 00:08:52.099 10:34:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 679397 00:08:52.099 10:34:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:52.099 10:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@830 -- # '[' -z 679397 ']' 00:08:52.099 10:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.099 10:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:52.099 10:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.099 10:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:52.099 10:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:52.099 [2024-06-10 10:34:16.139270] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
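For anyone reproducing this environment by hand: the nvmftestinit trace above boils down to parking one of the two ice ports in a private network namespace for the target side, leaving the other port in the root namespace for the initiator, opening TCP/4420, verifying the 10.0.0.0/24 link with ping, and then launching nvmf_tgt inside that namespace. A minimal sketch of the same steps follows, assuming the interface names cvl_0_0 / cvl_0_1 seen on this rig (they will differ on other hardware), a built SPDK tree under ./spdk, and root privileges; the real test/nvmf/common.sh helpers also flush stale addresses and pick the interfaces dynamically.

ip netns add cvl_0_0_ns_spdk                       # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move one port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic, as the harness does
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
modprobe nvme-tcp                                  # kernel initiator driver
ip netns exec cvl_0_0_ns_spdk ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &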
00:08:52.099 [2024-06-10 10:34:16.139333] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.099 EAL: No free 2048 kB hugepages reported on node 1 00:08:52.099 [2024-06-10 10:34:16.211756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:52.099 [2024-06-10 10:34:16.286707] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:52.099 [2024-06-10 10:34:16.286746] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:52.099 [2024-06-10 10:34:16.286753] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:52.099 [2024-06-10 10:34:16.286759] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:52.099 [2024-06-10 10:34:16.286765] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:52.099 [2024-06-10 10:34:16.286905] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.099 [2024-06-10 10:34:16.287023] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:08:52.099 [2024-06-10 10:34:16.287180] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.099 [2024-06-10 10:34:16.287180] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:08:52.669 10:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:52.669 10:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@863 -- # return 0 00:08:52.669 10:34:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:52.669 10:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:52.669 10:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:52.669 10:34:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:52.669 10:34:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:52.669 10:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:52.669 10:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:52.930 10:34:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:52.930 10:34:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:08:52.930 "tick_rate": 2400000000, 00:08:52.930 "poll_groups": [ 00:08:52.930 { 00:08:52.930 "name": "nvmf_tgt_poll_group_000", 00:08:52.930 "admin_qpairs": 0, 00:08:52.930 "io_qpairs": 0, 00:08:52.930 "current_admin_qpairs": 0, 00:08:52.930 "current_io_qpairs": 0, 00:08:52.930 "pending_bdev_io": 0, 00:08:52.930 "completed_nvme_io": 0, 00:08:52.930 "transports": [] 00:08:52.930 }, 00:08:52.930 { 00:08:52.930 "name": "nvmf_tgt_poll_group_001", 00:08:52.930 "admin_qpairs": 0, 00:08:52.930 "io_qpairs": 0, 00:08:52.930 "current_admin_qpairs": 0, 00:08:52.930 "current_io_qpairs": 0, 00:08:52.930 "pending_bdev_io": 0, 00:08:52.930 "completed_nvme_io": 0, 00:08:52.930 "transports": [] 00:08:52.930 }, 00:08:52.930 { 00:08:52.930 "name": "nvmf_tgt_poll_group_002", 00:08:52.930 "admin_qpairs": 0, 00:08:52.930 "io_qpairs": 0, 00:08:52.930 "current_admin_qpairs": 0, 00:08:52.930 "current_io_qpairs": 0, 00:08:52.930 "pending_bdev_io": 0, 00:08:52.930 "completed_nvme_io": 0, 00:08:52.930 "transports": [] 
00:08:52.930 }, 00:08:52.930 { 00:08:52.930 "name": "nvmf_tgt_poll_group_003", 00:08:52.930 "admin_qpairs": 0, 00:08:52.930 "io_qpairs": 0, 00:08:52.930 "current_admin_qpairs": 0, 00:08:52.930 "current_io_qpairs": 0, 00:08:52.930 "pending_bdev_io": 0, 00:08:52.930 "completed_nvme_io": 0, 00:08:52.930 "transports": [] 00:08:52.930 } 00:08:52.930 ] 00:08:52.930 }' 00:08:52.930 10:34:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:52.930 10:34:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:52.930 10:34:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:52.930 10:34:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:08:52.930 10:34:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:52.930 10:34:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:52.930 10:34:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:52.930 10:34:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:52.930 10:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:52.930 10:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:52.930 [2024-06-10 10:34:17.069164] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:52.930 10:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:52.930 10:34:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:52.930 10:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:52.930 10:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:52.930 10:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:52.930 10:34:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:08:52.930 "tick_rate": 2400000000, 00:08:52.930 "poll_groups": [ 00:08:52.930 { 00:08:52.930 "name": "nvmf_tgt_poll_group_000", 00:08:52.930 "admin_qpairs": 0, 00:08:52.930 "io_qpairs": 0, 00:08:52.930 "current_admin_qpairs": 0, 00:08:52.930 "current_io_qpairs": 0, 00:08:52.930 "pending_bdev_io": 0, 00:08:52.930 "completed_nvme_io": 0, 00:08:52.930 "transports": [ 00:08:52.930 { 00:08:52.930 "trtype": "TCP" 00:08:52.930 } 00:08:52.930 ] 00:08:52.930 }, 00:08:52.930 { 00:08:52.930 "name": "nvmf_tgt_poll_group_001", 00:08:52.930 "admin_qpairs": 0, 00:08:52.930 "io_qpairs": 0, 00:08:52.930 "current_admin_qpairs": 0, 00:08:52.930 "current_io_qpairs": 0, 00:08:52.930 "pending_bdev_io": 0, 00:08:52.930 "completed_nvme_io": 0, 00:08:52.930 "transports": [ 00:08:52.930 { 00:08:52.930 "trtype": "TCP" 00:08:52.930 } 00:08:52.930 ] 00:08:52.930 }, 00:08:52.930 { 00:08:52.930 "name": "nvmf_tgt_poll_group_002", 00:08:52.930 "admin_qpairs": 0, 00:08:52.930 "io_qpairs": 0, 00:08:52.930 "current_admin_qpairs": 0, 00:08:52.930 "current_io_qpairs": 0, 00:08:52.930 "pending_bdev_io": 0, 00:08:52.930 "completed_nvme_io": 0, 00:08:52.930 "transports": [ 00:08:52.930 { 00:08:52.930 "trtype": "TCP" 00:08:52.930 } 00:08:52.930 ] 00:08:52.930 }, 00:08:52.930 { 00:08:52.930 "name": "nvmf_tgt_poll_group_003", 00:08:52.930 "admin_qpairs": 0, 00:08:52.930 "io_qpairs": 0, 00:08:52.930 "current_admin_qpairs": 0, 00:08:52.930 "current_io_qpairs": 0, 00:08:52.930 "pending_bdev_io": 0, 00:08:52.930 "completed_nvme_io": 0, 00:08:52.930 "transports": [ 00:08:52.930 { 00:08:52.930 "trtype": "TCP" 00:08:52.930 } 00:08:52.930 ] 00:08:52.930 } 00:08:52.930 ] 
00:08:52.930 }' 00:08:52.930 10:34:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:52.930 10:34:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:52.930 10:34:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:52.930 10:34:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:52.930 10:34:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:52.930 10:34:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:52.930 10:34:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:52.930 10:34:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:52.930 10:34:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:52.930 10:34:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:52.930 10:34:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:08:52.930 10:34:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:52.930 10:34:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:52.930 10:34:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:52.930 10:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:52.930 10:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:52.930 Malloc1 00:08:52.930 10:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:52.930 10:34:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:52.930 10:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:52.930 10:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.191 10:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:53.191 10:34:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:53.191 10:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:53.191 10:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.192 10:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:53.192 10:34:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:53.192 10:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:53.192 10:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.192 10:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:53.192 10:34:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:53.192 10:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:53.192 10:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.192 [2024-06-10 10:34:17.252728] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:53.192 [2024-06-10 10:34:17.252950] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:53.192 10:34:17 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:53.192 10:34:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:08:53.192 10:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:08:53.192 10:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:08:53.192 10:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:08:53.192 10:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:53.192 10:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:08:53.192 10:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:53.192 10:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:08:53.192 10:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:53.192 10:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:08:53.192 10:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:08:53.192 10:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:08:53.192 [2024-06-10 10:34:17.279784] ctrlr.c: 817:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:08:53.192 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:53.192 could not add new controller: failed to write to nvme-fabrics device 00:08:53.192 10:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:08:53.192 10:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:53.192 10:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:08:53.192 10:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:53.192 10:34:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:53.192 10:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:53.192 10:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.192 10:34:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:53.192 10:34:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:54.578 10:34:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 
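Stripped of the harness wrappers, the rpc.sh sequence above is a compact demonstration of SPDK's per-subsystem host allowlist: a malloc-backed subsystem is created, allow_any_host is switched off, a connect from an unlisted host NQN is rejected on the target side ("Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host ..."), and the identical connect succeeds once that host NQN has been added. A rough equivalent using scripts/rpc.py and nvme-cli directly is sketched below; it assumes nvmf_tgt is already running and reachable at 10.0.0.2:4420, and <host-uuid> is a placeholder for the generated host NQN used by the test.

./spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
./spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./spdk/scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
./spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Not on the allowlist yet: the fabric connect is refused by the target.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
  --hostnqn=nqn.2014-08.org.nvmexpress:uuid:<host-uuid>        # expected to fail

# Add the host NQN, then the same connect goes through.
./spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
  nqn.2014-08.org.nvmexpress:uuid:<host-uuid>
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
  --hostnqn=nqn.2014-08.org.nvmexpress:uuid:<host-uuid>
nvme disconnect -n nqn.2016-06.io.spdk:cnode1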
00:08:54.578 10:34:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:08:54.578 10:34:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:08:54.578 10:34:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:08:54.578 10:34:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:08:56.490 10:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:08:56.490 10:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:56.490 10:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:08:56.490 10:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:08:56.490 10:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:08:56.490 10:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:08:56.490 10:34:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:56.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.751 10:34:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:56.751 10:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:08:56.751 10:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:08:56.751 10:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:56.751 10:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:08:56.751 10:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:56.751 10:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:08:56.751 10:34:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:56.751 10:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:56.751 10:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.751 10:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:56.751 10:34:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:56.751 10:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:08:56.751 10:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:56.751 10:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:08:56.751 10:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:56.751 10:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:08:56.752 10:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:56.752 10:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:08:56.752 10:34:20 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:56.752 10:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:08:56.752 10:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:08:56.752 10:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:56.752 [2024-06-10 10:34:20.966274] ctrlr.c: 817:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:08:56.752 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:56.752 could not add new controller: failed to write to nvme-fabrics device 00:08:56.752 10:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:08:56.752 10:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:56.752 10:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:08:56.752 10:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:56.752 10:34:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:08:56.752 10:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:56.752 10:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.752 10:34:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:56.752 10:34:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:58.662 10:34:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:08:58.662 10:34:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:08:58.662 10:34:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:08:58.662 10:34:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:08:58.662 10:34:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:09:00.578 10:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:00.578 10:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:00.578 10:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:00.578 10:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:00.578 10:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:00.578 10:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:09:00.578 10:34:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:00.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.578 10:34:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:00.578 10:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:09:00.578 10:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:00.578 10:34:24 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:00.578 10:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:00.578 10:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:00.578 10:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:09:00.578 10:34:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:00.578 10:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:00.578 10:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.578 10:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:00.578 10:34:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:09:00.578 10:34:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:00.578 10:34:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:00.578 10:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:00.578 10:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.578 10:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:00.578 10:34:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:00.578 10:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:00.578 10:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.578 [2024-06-10 10:34:24.653196] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:00.578 10:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:00.578 10:34:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:00.578 10:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:00.578 10:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.578 10:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:00.578 10:34:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:00.578 10:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:00.578 10:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.578 10:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:00.578 10:34:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:01.963 10:34:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:01.963 10:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:09:01.963 10:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:09:01.963 10:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:09:01.963 10:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:09:04.509 10:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:04.510 
10:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:04.510 10:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:04.510 10:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:04.510 10:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:04.510 10:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:09:04.510 10:34:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:04.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.510 10:34:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:04.510 10:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:09:04.510 10:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:04.510 10:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:04.510 10:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:04.510 10:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:04.510 10:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:09:04.510 10:34:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:04.510 10:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:04.510 10:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.510 10:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:04.510 10:34:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:04.510 10:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:04.510 10:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.510 10:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:04.510 10:34:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:04.510 10:34:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:04.510 10:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:04.510 10:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.510 10:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:04.510 10:34:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:04.510 10:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:04.510 10:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.510 [2024-06-10 10:34:28.361871] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:04.510 10:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:04.510 10:34:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:04.510 10:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:04.510 10:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set 
+x 00:09:04.510 10:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:04.510 10:34:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:04.510 10:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:04.510 10:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.510 10:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:04.510 10:34:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:05.896 10:34:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:05.896 10:34:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:09:05.896 10:34:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:09:05.896 10:34:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:09:05.896 10:34:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:09:07.887 10:34:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:07.887 10:34:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:07.887 10:34:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:07.887 10:34:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:07.887 10:34:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:07.887 10:34:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:09:07.887 10:34:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:07.887 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.887 10:34:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:07.887 10:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:09:07.887 10:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:07.887 10:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:07.887 10:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:07.887 10:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:07.887 10:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:09:07.887 10:34:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:07.887 10:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:07.887 10:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.887 10:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:07.887 10:34:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:07.887 10:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:07.887 10:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.887 10:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:07.887 10:34:32 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:07.887 10:34:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:07.887 10:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:07.887 10:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.887 10:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:07.887 10:34:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:07.887 10:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:07.887 10:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.887 [2024-06-10 10:34:32.080823] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:07.887 10:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:07.887 10:34:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:07.887 10:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:07.887 10:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.887 10:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:07.887 10:34:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:07.887 10:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:07.887 10:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.887 10:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:07.887 10:34:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:09.800 10:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:09.800 10:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:09:09.800 10:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:09:09.800 10:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:09:09.800 10:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:11.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1218 -- # local i=0 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.732 [2024-06-10 10:34:35.787580] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:11.732 10:34:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:13.117 10:34:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial 
SPDKISFASTANDAWESOME 00:09:13.117 10:34:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:09:13.117 10:34:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:09:13.117 10:34:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:09:13.117 10:34:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:09:15.665 10:34:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:15.665 10:34:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:15.665 10:34:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:15.665 10:34:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:15.665 10:34:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:15.666 10:34:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:09:15.666 10:34:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:15.666 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.666 10:34:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:15.666 10:34:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:09:15.666 10:34:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:15.666 10:34:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:15.666 10:34:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:15.666 10:34:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:15.666 10:34:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:09:15.666 10:34:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:15.666 10:34:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:15.666 10:34:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.666 10:34:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:15.666 10:34:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:15.666 10:34:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:15.666 10:34:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.666 10:34:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:15.666 10:34:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:15.666 10:34:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:15.666 10:34:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:15.666 10:34:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.666 10:34:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:15.666 10:34:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:15.666 10:34:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:15.666 10:34:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.666 
[2024-06-10 10:34:39.594578] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:15.666 10:34:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:15.666 10:34:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:15.666 10:34:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:15.666 10:34:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.666 10:34:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:15.666 10:34:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:15.666 10:34:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:15.666 10:34:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.666 10:34:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:15.666 10:34:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:17.050 10:34:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:17.050 10:34:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:09:17.050 10:34:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:09:17.050 10:34:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:09:17.050 10:34:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:09:18.963 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:09:18.963 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:18.963 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:09:18.963 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:09:18.963 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:09:18.963 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:09:18.963 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:18.963 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.963 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:18.963 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:09:18.963 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:09:18.963 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:18.963 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:09:18.963 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.231 [2024-06-10 10:34:43.303541] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 
-- # xtrace_disable 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.231 [2024-06-10 10:34:43.363689] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.231 [2024-06-10 10:34:43.423864] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:19.231 
10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.231 [2024-06-10 10:34:43.480050] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.231 10:34:43 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.231 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.492 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.492 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:19.492 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:19.492 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.492 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.492 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.492 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:19.492 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.492 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.492 [2024-06-10 10:34:43.540236] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:19.492 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.492 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:19.492 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.492 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.492 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.492 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:19.492 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.492 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.492 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.492 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.492 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.492 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.492 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.492 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:19.492 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.492 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.492 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:19.492 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:19.492 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:19.492 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.492 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
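The trace above runs two loops against the same subsystem NQN: one that builds the subsystem, attaches a TCP listener and a namespace, connects from the host, waits for the serial number to show up in lsblk, then tears everything down again; and a second loop that only creates and deletes target-side objects without a host connection. A minimal condensation of the first loop, assuming rpc.py is called directly instead of through the suite's rpc_cmd wrapper (the rpc path and loop count here are illustrative, not copied from rpc.sh):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # illustrative path
loops=5                                                                # assumption for the sketch
for i in $(seq 1 $loops); do
  # target side: subsystem, TCP listener, namespace, open host access
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
  $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  # host side: connect, wait until the serial is visible, then disconnect
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  # target side: remove the namespace and the subsystem again
  $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done

The real run additionally passes --hostnqn/--hostid to nvme connect and bounds the lsblk wait at 15 retries, as visible in the waitforserial trace above.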
00:09:19.492 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:09:19.492 "tick_rate": 2400000000, 00:09:19.492 "poll_groups": [ 00:09:19.492 { 00:09:19.492 "name": "nvmf_tgt_poll_group_000", 00:09:19.492 "admin_qpairs": 0, 00:09:19.492 "io_qpairs": 224, 00:09:19.492 "current_admin_qpairs": 0, 00:09:19.492 "current_io_qpairs": 0, 00:09:19.492 "pending_bdev_io": 0, 00:09:19.492 "completed_nvme_io": 276, 00:09:19.492 "transports": [ 00:09:19.492 { 00:09:19.492 "trtype": "TCP" 00:09:19.492 } 00:09:19.492 ] 00:09:19.492 }, 00:09:19.492 { 00:09:19.492 "name": "nvmf_tgt_poll_group_001", 00:09:19.492 "admin_qpairs": 1, 00:09:19.492 "io_qpairs": 223, 00:09:19.492 "current_admin_qpairs": 0, 00:09:19.492 "current_io_qpairs": 0, 00:09:19.492 "pending_bdev_io": 0, 00:09:19.492 "completed_nvme_io": 273, 00:09:19.492 "transports": [ 00:09:19.492 { 00:09:19.492 "trtype": "TCP" 00:09:19.492 } 00:09:19.492 ] 00:09:19.492 }, 00:09:19.492 { 00:09:19.492 "name": "nvmf_tgt_poll_group_002", 00:09:19.492 "admin_qpairs": 6, 00:09:19.492 "io_qpairs": 218, 00:09:19.492 "current_admin_qpairs": 0, 00:09:19.492 "current_io_qpairs": 0, 00:09:19.492 "pending_bdev_io": 0, 00:09:19.492 "completed_nvme_io": 466, 00:09:19.492 "transports": [ 00:09:19.492 { 00:09:19.492 "trtype": "TCP" 00:09:19.492 } 00:09:19.492 ] 00:09:19.492 }, 00:09:19.492 { 00:09:19.492 "name": "nvmf_tgt_poll_group_003", 00:09:19.492 "admin_qpairs": 0, 00:09:19.492 "io_qpairs": 224, 00:09:19.492 "current_admin_qpairs": 0, 00:09:19.492 "current_io_qpairs": 0, 00:09:19.492 "pending_bdev_io": 0, 00:09:19.492 "completed_nvme_io": 224, 00:09:19.492 "transports": [ 00:09:19.492 { 00:09:19.493 "trtype": "TCP" 00:09:19.493 } 00:09:19.493 ] 00:09:19.493 } 00:09:19.493 ] 00:09:19.493 }' 00:09:19.493 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:19.493 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:19.493 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:19.493 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:19.493 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:19.493 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:19.493 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:19.493 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:19.493 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:19.493 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:09:19.493 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:19.493 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:19.493 10:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:09:19.493 10:34:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:19.493 10:34:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:09:19.493 10:34:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:19.493 10:34:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:09:19.493 10:34:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:19.493 10:34:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:19.493 rmmod nvme_tcp 00:09:19.493 rmmod nvme_fabrics 00:09:19.493 rmmod nvme_keyring 00:09:19.493 
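The nvmf_get_stats check above does not parse the JSON structurally; the jsum helper selects one numeric field per poll group with jq and totals the column with awk, and the test only asserts that the totals are non-zero. A standalone equivalent, assuming the stats JSON is captured into $stats first (the function name matches rpc.sh, the rest is a sketch):

jsum() {
  local filter=$1
  # one number per poll group from jq, summed by awk, exactly as traced above
  jq "$filter" <<<"$stats" | awk '{s+=$1} END {print s}'
}

stats=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_stats)  # illustrative path
(( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # sums to 7 in this run (0+1+6+0)
(( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # sums to 889 in this run (224+223+218+224)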
10:34:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:19.493 10:34:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:09:19.493 10:34:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:09:19.493 10:34:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 679397 ']' 00:09:19.493 10:34:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 679397 00:09:19.493 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@949 -- # '[' -z 679397 ']' 00:09:19.493 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # kill -0 679397 00:09:19.493 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # uname 00:09:19.493 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:19.493 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 679397 00:09:19.753 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:19.753 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:19.753 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 679397' 00:09:19.753 killing process with pid 679397 00:09:19.753 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@968 -- # kill 679397 00:09:19.753 [2024-06-10 10:34:43.820952] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:19.753 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@973 -- # wait 679397 00:09:19.753 10:34:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:19.753 10:34:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:19.753 10:34:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:19.753 10:34:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:19.753 10:34:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:19.753 10:34:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.753 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:19.753 10:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.296 10:34:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:22.296 00:09:22.296 real 0m36.756s 00:09:22.296 user 1m52.250s 00:09:22.296 sys 0m6.725s 00:09:22.296 10:34:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:22.296 10:34:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.296 ************************************ 00:09:22.296 END TEST nvmf_rpc 00:09:22.296 ************************************ 00:09:22.296 10:34:46 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:22.296 10:34:46 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:22.296 10:34:46 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:22.296 10:34:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:22.296 ************************************ 00:09:22.296 START TEST nvmf_invalid 00:09:22.296 ************************************ 00:09:22.296 10:34:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:22.296 * Looking for test storage... 00:09:22.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:22.296 10:34:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:22.296 10:34:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:22.297 10:34:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:28.890 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:28.890 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:28.890 Found net devices under 0000:31:00.0: cvl_0_0 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:28.890 Found net devices under 0000:31:00.1: cvl_0_1 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:28.890 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:29.152 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:29.152 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:29.152 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:29.152 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:29.152 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:29.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:29.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:09:29.152 00:09:29.152 --- 10.0.0.2 ping statistics --- 00:09:29.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.152 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:09:29.152 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:29.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:29.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:09:29.152 00:09:29.152 --- 10.0.0.1 ping statistics --- 00:09:29.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.152 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:09:29.152 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:29.152 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:29.152 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:29.152 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:29.152 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:29.152 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:29.152 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:29.152 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:29.152 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:29.152 10:34:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:29.152 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:29.152 10:34:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:29.152 10:34:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:29.152 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=689083 00:09:29.152 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 689083 00:09:29.152 10:34:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:29.152 10:34:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@830 -- # '[' -z 689083 ']' 00:09:29.152 10:34:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.152 10:34:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:29.152 10:34:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.152 10:34:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:29.152 10:34:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:29.152 [2024-06-10 10:34:53.410860] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
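Before nvmf_tgt is started, nvmf/common.sh gives the target its own network namespace so that the initiator tools on the same machine reach it over a real TCP path: the first of the two ice-driven ports (cvl_0_0) is moved into the namespace and gets 10.0.0.2, the second port (cvl_0_1) stays in the root namespace with 10.0.0.1, port 4420 is opened in iptables, and a ping in each direction confirms reachability, which is what the ping output above shows. Condensed from the traced commands (interface names and addresses are this host's and would differ elsewhere):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the netns
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator

nvmf_tgt itself is then launched under ip netns exec cvl_0_0_ns_spdk, which is why every subsequent RPC listener address in the log is 10.0.0.2.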
00:09:29.152 [2024-06-10 10:34:53.410908] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.414 EAL: No free 2048 kB hugepages reported on node 1 00:09:29.414 [2024-06-10 10:34:53.476768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:29.414 [2024-06-10 10:34:53.542039] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:29.414 [2024-06-10 10:34:53.542074] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:29.414 [2024-06-10 10:34:53.542081] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:29.414 [2024-06-10 10:34:53.542087] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:29.414 [2024-06-10 10:34:53.542093] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:29.414 [2024-06-10 10:34:53.542232] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.414 [2024-06-10 10:34:53.542449] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.414 [2024-06-10 10:34:53.542450] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:09:29.414 [2024-06-10 10:34:53.542342] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:09:29.985 10:34:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:29.985 10:34:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@863 -- # return 0 00:09:29.985 10:34:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:29.985 10:34:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:29.985 10:34:54 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:29.985 10:34:54 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:29.985 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:29.985 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode22769 00:09:30.246 [2024-06-10 10:34:54.356151] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:30.246 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:09:30.246 { 00:09:30.246 "nqn": "nqn.2016-06.io.spdk:cnode22769", 00:09:30.246 "tgt_name": "foobar", 00:09:30.246 "method": "nvmf_create_subsystem", 00:09:30.246 "req_id": 1 00:09:30.246 } 00:09:30.246 Got JSON-RPC error response 00:09:30.246 response: 00:09:30.246 { 00:09:30.246 "code": -32603, 00:09:30.246 "message": "Unable to find target foobar" 00:09:30.246 }' 00:09:30.246 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:30.246 { 00:09:30.246 "nqn": "nqn.2016-06.io.spdk:cnode22769", 00:09:30.246 "tgt_name": "foobar", 00:09:30.246 "method": "nvmf_create_subsystem", 00:09:30.246 "req_id": 1 00:09:30.246 } 00:09:30.246 Got JSON-RPC error response 00:09:30.246 response: 00:09:30.246 { 00:09:30.246 "code": -32603, 00:09:30.246 "message": "Unable to find target foobar" 00:09:30.246 } == *\U\n\a\b\l\e\ 
\t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:30.246 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:30.246 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode19382 00:09:30.246 [2024-06-10 10:34:54.528713] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19382: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:30.508 { 00:09:30.508 "nqn": "nqn.2016-06.io.spdk:cnode19382", 00:09:30.508 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:30.508 "method": "nvmf_create_subsystem", 00:09:30.508 "req_id": 1 00:09:30.508 } 00:09:30.508 Got JSON-RPC error response 00:09:30.508 response: 00:09:30.508 { 00:09:30.508 "code": -32602, 00:09:30.508 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:30.508 }' 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:09:30.508 { 00:09:30.508 "nqn": "nqn.2016-06.io.spdk:cnode19382", 00:09:30.508 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:30.508 "method": "nvmf_create_subsystem", 00:09:30.508 "req_id": 1 00:09:30.508 } 00:09:30.508 Got JSON-RPC error response 00:09:30.508 response: 00:09:30.508 { 00:09:30.508 "code": -32602, 00:09:30.508 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:30.508 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode18218 00:09:30.508 [2024-06-10 10:34:54.705287] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18218: invalid model number 'SPDK_Controller' 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:30.508 { 00:09:30.508 "nqn": "nqn.2016-06.io.spdk:cnode18218", 00:09:30.508 "model_number": "SPDK_Controller\u001f", 00:09:30.508 "method": "nvmf_create_subsystem", 00:09:30.508 "req_id": 1 00:09:30.508 } 00:09:30.508 Got JSON-RPC error response 00:09:30.508 response: 00:09:30.508 { 00:09:30.508 "code": -32602, 00:09:30.508 "message": "Invalid MN SPDK_Controller\u001f" 00:09:30.508 }' 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:30.508 { 00:09:30.508 "nqn": "nqn.2016-06.io.spdk:cnode18218", 00:09:30.508 "model_number": "SPDK_Controller\u001f", 00:09:30.508 "method": "nvmf_create_subsystem", 00:09:30.508 "req_id": 1 00:09:30.508 } 00:09:30.508 Got JSON-RPC error response 00:09:30.508 response: 00:09:30.508 { 00:09:30.508 "code": -32602, 00:09:30.508 "message": "Invalid MN SPDK_Controller\u001f" 00:09:30.508 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' 
'90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:09:30.508 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:09:30.509 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:09:30.509 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:30.509 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:30.509 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:09:30.509 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:09:30.509 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:09:30.509 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:30.509 10:34:54 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:09:30.509 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:09:30.770 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:09:30.770 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:09:30.770 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:30.770 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:30.770 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:09:30.770 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:09:30.770 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:09:30.770 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:30.770 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:30.770 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:09:30.770 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:09:30.770 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:09:30.770 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:30.770 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:30.770 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:09:30.770 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:09:30.770 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:09:30.770 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:30.770 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:30.770 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:09:30.770 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:30.770 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:09:30.770 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:30.770 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:30.770 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:09:30.770 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:30.770 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid 
-- target/invalid.sh@25 -- # printf %x 118 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ L == \- ]] 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'L6/-$o#i9b~V\#^vCWeH3' 00:09:30.771 10:34:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'L6/-$o#i9b~V\#^vCWeH3' nqn.2016-06.io.spdk:cnode16451 00:09:30.771 [2024-06-10 10:34:55.034318] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16451: invalid serial number 'L6/-$o#i9b~V\#^vCWeH3' 00:09:31.032 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:09:31.032 { 00:09:31.032 "nqn": "nqn.2016-06.io.spdk:cnode16451", 00:09:31.032 "serial_number": "L6/-$o#i9b~V\\#^vCWeH3", 00:09:31.033 "method": "nvmf_create_subsystem", 00:09:31.033 "req_id": 1 00:09:31.033 } 00:09:31.033 Got JSON-RPC error response 00:09:31.033 response: 00:09:31.033 { 00:09:31.033 "code": -32602, 
00:09:31.033 "message": "Invalid SN L6/-$o#i9b~V\\#^vCWeH3" 00:09:31.033 }' 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:09:31.033 { 00:09:31.033 "nqn": "nqn.2016-06.io.spdk:cnode16451", 00:09:31.033 "serial_number": "L6/-$o#i9b~V\\#^vCWeH3", 00:09:31.033 "method": "nvmf_create_subsystem", 00:09:31.033 "req_id": 1 00:09:31.033 } 00:09:31.033 Got JSON-RPC error response 00:09:31.033 response: 00:09:31.033 { 00:09:31.033 "code": -32602, 00:09:31.033 "message": "Invalid SN L6/-$o#i9b~V\\#^vCWeH3" 00:09:31.033 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:09:31.033 10:34:55 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.033 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.034 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.295 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:09:31.295 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:09:31.295 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:09:31.295 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.295 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:09:31.295 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:09:31.295 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:09:31.295 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:09:31.295 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.295 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.295 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:09:31.295 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:09:31.295 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:09:31.295 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.295 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.295 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:09:31.295 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:09:31.295 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:09:31.295 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.296 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.296 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:09:31.296 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:09:31.296 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:09:31.296 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.296 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.296 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:09:31.296 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:09:31.296 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:09:31.296 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.296 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.296 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:09:31.296 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:09:31.296 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:09:31.296 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.296 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.296 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ 4 == \- ]] 00:09:31.296 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '4YPGQiu_(\Y!@kHKkA!eJr|(iVqy[Cl'\''Z)m;KA1+3' 00:09:31.296 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '4YPGQiu_(\Y!@kHKkA!eJr|(iVqy[Cl'\''Z)m;KA1+3' nqn.2016-06.io.spdk:cnode10137 00:09:31.296 [2024-06-10 10:34:55.511832] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10137: invalid model number '4YPGQiu_(\Y!@kHKkA!eJr|(iVqy[Cl'Z)m;KA1+3' 00:09:31.296 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:09:31.296 { 00:09:31.296 "nqn": "nqn.2016-06.io.spdk:cnode10137", 00:09:31.296 "model_number": "4YPGQiu_(\\Y!@kHKkA!eJr|(iVqy[Cl'\''Z)m;KA1+3", 00:09:31.296 "method": "nvmf_create_subsystem", 00:09:31.296 "req_id": 1 
00:09:31.296 } 00:09:31.296 Got JSON-RPC error response 00:09:31.296 response: 00:09:31.296 { 00:09:31.296 "code": -32602, 00:09:31.296 "message": "Invalid MN 4YPGQiu_(\\Y!@kHKkA!eJr|(iVqy[Cl'\''Z)m;KA1+3" 00:09:31.296 }' 00:09:31.296 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:09:31.296 { 00:09:31.296 "nqn": "nqn.2016-06.io.spdk:cnode10137", 00:09:31.296 "model_number": "4YPGQiu_(\\Y!@kHKkA!eJr|(iVqy[Cl'Z)m;KA1+3", 00:09:31.296 "method": "nvmf_create_subsystem", 00:09:31.296 "req_id": 1 00:09:31.296 } 00:09:31.296 Got JSON-RPC error response 00:09:31.296 response: 00:09:31.296 { 00:09:31.296 "code": -32602, 00:09:31.296 "message": "Invalid MN 4YPGQiu_(\\Y!@kHKkA!eJr|(iVqy[Cl'Z)m;KA1+3" 00:09:31.296 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:31.296 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:09:31.557 [2024-06-10 10:34:55.680437] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:31.557 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:31.819 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:09:31.819 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:09:31.819 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:09:31.819 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:09:31.819 10:34:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:09:31.819 [2024-06-10 10:34:56.037517] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:31.819 [2024-06-10 10:34:56.037575] nvmf_rpc.c: 794:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:31.819 10:34:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:09:31.819 { 00:09:31.819 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:31.819 "listen_address": { 00:09:31.819 "trtype": "tcp", 00:09:31.819 "traddr": "", 00:09:31.819 "trsvcid": "4421" 00:09:31.819 }, 00:09:31.819 "method": "nvmf_subsystem_remove_listener", 00:09:31.819 "req_id": 1 00:09:31.819 } 00:09:31.819 Got JSON-RPC error response 00:09:31.819 response: 00:09:31.819 { 00:09:31.819 "code": -32602, 00:09:31.819 "message": "Invalid parameters" 00:09:31.819 }' 00:09:31.819 10:34:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:09:31.819 { 00:09:31.819 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:31.819 "listen_address": { 00:09:31.819 "trtype": "tcp", 00:09:31.819 "traddr": "", 00:09:31.819 "trsvcid": "4421" 00:09:31.819 }, 00:09:31.819 "method": "nvmf_subsystem_remove_listener", 00:09:31.819 "req_id": 1 00:09:31.819 } 00:09:31.819 Got JSON-RPC error response 00:09:31.819 response: 00:09:31.819 { 00:09:31.819 "code": -32602, 00:09:31.819 "message": "Invalid parameters" 00:09:31.819 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:09:31.819 10:34:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16772 -i 0 00:09:32.081 [2024-06-10 10:34:56.202052] 
nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16772: invalid cntlid range [0-65519] 00:09:32.081 10:34:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:09:32.081 { 00:09:32.081 "nqn": "nqn.2016-06.io.spdk:cnode16772", 00:09:32.081 "min_cntlid": 0, 00:09:32.081 "method": "nvmf_create_subsystem", 00:09:32.081 "req_id": 1 00:09:32.081 } 00:09:32.081 Got JSON-RPC error response 00:09:32.081 response: 00:09:32.081 { 00:09:32.081 "code": -32602, 00:09:32.081 "message": "Invalid cntlid range [0-65519]" 00:09:32.081 }' 00:09:32.081 10:34:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:09:32.081 { 00:09:32.081 "nqn": "nqn.2016-06.io.spdk:cnode16772", 00:09:32.081 "min_cntlid": 0, 00:09:32.081 "method": "nvmf_create_subsystem", 00:09:32.081 "req_id": 1 00:09:32.081 } 00:09:32.081 Got JSON-RPC error response 00:09:32.081 response: 00:09:32.081 { 00:09:32.081 "code": -32602, 00:09:32.081 "message": "Invalid cntlid range [0-65519]" 00:09:32.081 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:32.081 10:34:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2748 -i 65520 00:09:32.081 [2024-06-10 10:34:56.366583] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2748: invalid cntlid range [65520-65519] 00:09:32.342 10:34:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:09:32.342 { 00:09:32.342 "nqn": "nqn.2016-06.io.spdk:cnode2748", 00:09:32.342 "min_cntlid": 65520, 00:09:32.342 "method": "nvmf_create_subsystem", 00:09:32.342 "req_id": 1 00:09:32.343 } 00:09:32.343 Got JSON-RPC error response 00:09:32.343 response: 00:09:32.343 { 00:09:32.343 "code": -32602, 00:09:32.343 "message": "Invalid cntlid range [65520-65519]" 00:09:32.343 }' 00:09:32.343 10:34:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:09:32.343 { 00:09:32.343 "nqn": "nqn.2016-06.io.spdk:cnode2748", 00:09:32.343 "min_cntlid": 65520, 00:09:32.343 "method": "nvmf_create_subsystem", 00:09:32.343 "req_id": 1 00:09:32.343 } 00:09:32.343 Got JSON-RPC error response 00:09:32.343 response: 00:09:32.343 { 00:09:32.343 "code": -32602, 00:09:32.343 "message": "Invalid cntlid range [65520-65519]" 00:09:32.343 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:32.343 10:34:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24491 -I 0 00:09:32.343 [2024-06-10 10:34:56.539125] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24491: invalid cntlid range [1-0] 00:09:32.343 10:34:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:09:32.343 { 00:09:32.343 "nqn": "nqn.2016-06.io.spdk:cnode24491", 00:09:32.343 "max_cntlid": 0, 00:09:32.343 "method": "nvmf_create_subsystem", 00:09:32.343 "req_id": 1 00:09:32.343 } 00:09:32.343 Got JSON-RPC error response 00:09:32.343 response: 00:09:32.343 { 00:09:32.343 "code": -32602, 00:09:32.343 "message": "Invalid cntlid range [1-0]" 00:09:32.343 }' 00:09:32.343 10:34:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:09:32.343 { 00:09:32.343 "nqn": "nqn.2016-06.io.spdk:cnode24491", 00:09:32.343 "max_cntlid": 0, 00:09:32.343 "method": "nvmf_create_subsystem", 00:09:32.343 "req_id": 1 00:09:32.343 } 00:09:32.343 Got JSON-RPC error response 
00:09:32.343 response: 00:09:32.343 { 00:09:32.343 "code": -32602, 00:09:32.343 "message": "Invalid cntlid range [1-0]" 00:09:32.343 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:32.343 10:34:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2733 -I 65520 00:09:32.605 [2024-06-10 10:34:56.711655] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2733: invalid cntlid range [1-65520] 00:09:32.605 10:34:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:09:32.605 { 00:09:32.605 "nqn": "nqn.2016-06.io.spdk:cnode2733", 00:09:32.605 "max_cntlid": 65520, 00:09:32.605 "method": "nvmf_create_subsystem", 00:09:32.605 "req_id": 1 00:09:32.605 } 00:09:32.605 Got JSON-RPC error response 00:09:32.605 response: 00:09:32.605 { 00:09:32.605 "code": -32602, 00:09:32.605 "message": "Invalid cntlid range [1-65520]" 00:09:32.605 }' 00:09:32.605 10:34:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:09:32.605 { 00:09:32.605 "nqn": "nqn.2016-06.io.spdk:cnode2733", 00:09:32.605 "max_cntlid": 65520, 00:09:32.605 "method": "nvmf_create_subsystem", 00:09:32.605 "req_id": 1 00:09:32.605 } 00:09:32.605 Got JSON-RPC error response 00:09:32.605 response: 00:09:32.605 { 00:09:32.605 "code": -32602, 00:09:32.605 "message": "Invalid cntlid range [1-65520]" 00:09:32.605 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:32.605 10:34:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27420 -i 6 -I 5 00:09:32.605 [2024-06-10 10:34:56.876166] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27420: invalid cntlid range [6-5] 00:09:32.866 10:34:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:09:32.866 { 00:09:32.866 "nqn": "nqn.2016-06.io.spdk:cnode27420", 00:09:32.866 "min_cntlid": 6, 00:09:32.867 "max_cntlid": 5, 00:09:32.867 "method": "nvmf_create_subsystem", 00:09:32.867 "req_id": 1 00:09:32.867 } 00:09:32.867 Got JSON-RPC error response 00:09:32.867 response: 00:09:32.867 { 00:09:32.867 "code": -32602, 00:09:32.867 "message": "Invalid cntlid range [6-5]" 00:09:32.867 }' 00:09:32.867 10:34:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:09:32.867 { 00:09:32.867 "nqn": "nqn.2016-06.io.spdk:cnode27420", 00:09:32.867 "min_cntlid": 6, 00:09:32.867 "max_cntlid": 5, 00:09:32.867 "method": "nvmf_create_subsystem", 00:09:32.867 "req_id": 1 00:09:32.867 } 00:09:32.867 Got JSON-RPC error response 00:09:32.867 response: 00:09:32.867 { 00:09:32.867 "code": -32602, 00:09:32.867 "message": "Invalid cntlid range [6-5]" 00:09:32.867 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:32.867 10:34:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:32.867 10:34:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:09:32.867 { 00:09:32.867 "name": "foobar", 00:09:32.867 "method": "nvmf_delete_target", 00:09:32.867 "req_id": 1 00:09:32.867 } 00:09:32.867 Got JSON-RPC error response 00:09:32.867 response: 00:09:32.867 { 00:09:32.867 "code": -32602, 00:09:32.867 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:09:32.867 }' 00:09:32.867 10:34:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:09:32.867 { 00:09:32.867 "name": "foobar", 00:09:32.867 "method": "nvmf_delete_target", 00:09:32.867 "req_id": 1 00:09:32.867 } 00:09:32.867 Got JSON-RPC error response 00:09:32.867 response: 00:09:32.867 { 00:09:32.867 "code": -32602, 00:09:32.867 "message": "The specified target doesn't exist, cannot delete it." 00:09:32.867 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:32.867 10:34:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:32.867 10:34:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:09:32.867 10:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:32.867 10:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:09:32.867 10:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:32.867 10:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:09:32.867 10:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:32.867 10:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:32.867 rmmod nvme_tcp 00:09:32.867 rmmod nvme_fabrics 00:09:32.867 rmmod nvme_keyring 00:09:32.867 10:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:32.867 10:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:09:32.867 10:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:09:32.867 10:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 689083 ']' 00:09:32.867 10:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 689083 00:09:32.867 10:34:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@949 -- # '[' -z 689083 ']' 00:09:32.867 10:34:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # kill -0 689083 00:09:32.867 10:34:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # uname 00:09:32.867 10:34:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:32.867 10:34:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 689083 00:09:32.867 10:34:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:09:32.867 10:34:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:09:32.867 10:34:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # echo 'killing process with pid 689083' 00:09:32.867 killing process with pid 689083 00:09:32.867 10:34:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@968 -- # kill 689083 00:09:32.867 [2024-06-10 10:34:57.127280] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:32.867 10:34:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@973 -- # wait 689083 00:09:33.129 10:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:33.129 10:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:33.129 10:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:33.129 10:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:33.129 10:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 
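The long per-character traces earlier in this test come from the gen_random_s helper in target/invalid.sh: it draws codes from the printable-ASCII table (32-127, the chars=() array in the trace), converts each one with printf %x / echo -e, and appends it to a throwaway serial or model number that nvmf_create_subsystem is then expected to reject with a -32602 error. A minimal sketch of that idea, assuming a simplified stand-in for the real helper (which also re-checks strings that begin with '-', as the [[ ... == \- ]] guards above show):

gen_random_s() {
    local length=$1 ll string=
    for ((ll = 0; ll < length; ll++)); do
        # pick a code point in 32..127, the same range as the chars=() array in the trace
        local code=$((RANDOM % 96 + 32))
        local hex
        hex=$(printf '%x' "$code")       # same printf %x step as the trace
        string+=$(echo -e "\x$hex")      # decode it back into a character and append it
    done
    echo "$string"
}

# Representative negative calls driven with such strings (rpc.py path shortened; the job
# uses the absolute /var/jenkins/... path). Each is expected to fail with code -32602:
#   scripts/rpc.py nvmf_create_subsystem -s "$(gen_random_s 21)" nqn.2016-06.io.spdk:cnode1   # "Invalid SN ..."
#   scripts/rpc.py nvmf_create_subsystem -d "$(gen_random_s 41)" nqn.2016-06.io.spdk:cnode1   # "Invalid MN ..."
#   scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -i 0                      # "Invalid cntlid range [0-65519]"
#   scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -i 6 -I 5                 # "Invalid cntlid range [6-5]"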
00:09:33.129 10:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.129 10:34:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:33.129 10:34:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.043 10:34:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:35.305 00:09:35.305 real 0m13.218s 00:09:35.305 user 0m18.970s 00:09:35.305 sys 0m6.166s 00:09:35.305 10:34:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:35.305 10:34:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:35.305 ************************************ 00:09:35.305 END TEST nvmf_invalid 00:09:35.305 ************************************ 00:09:35.305 10:34:59 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:35.305 10:34:59 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:35.305 10:34:59 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:35.305 10:34:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:35.305 ************************************ 00:09:35.305 START TEST nvmf_abort 00:09:35.305 ************************************ 00:09:35.305 10:34:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:35.305 * Looking for test storage... 00:09:35.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:35.305 10:34:59 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:35.305 10:34:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:35.305 10:34:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:35.305 10:34:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.305 10:34:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.305 10:34:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.305 10:34:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.305 10:34:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.305 10:34:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.305 10:34:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.305 10:34:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.305 10:34:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.305 10:34:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:35.305 10:34:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:35.305 10:34:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.305 10:34:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.305 10:34:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:35.305 10:34:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:35.305 10:34:59 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:35.305 10:34:59 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.305 10:34:59 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.305 10:34:59 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.305 10:34:59 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.305 10:34:59 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.305 10:34:59 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.305 10:34:59 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:35.306 10:34:59 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.306 10:34:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:35.306 10:34:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:35.306 10:34:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:35.306 10:34:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:35.306 10:34:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.306 10:34:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.306 10:34:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:35.306 
10:34:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:35.306 10:34:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:35.306 10:34:59 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:35.306 10:34:59 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:35.306 10:34:59 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:35.306 10:34:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:35.306 10:34:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:35.306 10:34:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:35.306 10:34:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:35.306 10:34:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:35.306 10:34:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.306 10:34:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:35.306 10:34:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.306 10:34:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:35.306 10:34:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:35.306 10:34:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:35.306 10:34:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:43.451 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:43.451 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.451 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: 
cvl_0_0' 00:09:43.452 Found net devices under 0000:31:00.0: cvl_0_0 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:43.452 Found net devices under 0000:31:00.1: cvl_0_1 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:43.452 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:43.452 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:09:43.452 00:09:43.452 --- 10.0.0.2 ping statistics --- 00:09:43.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.452 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:43.452 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:43.452 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.525 ms 00:09:43.452 00:09:43.452 --- 10.0.0.1 ping statistics --- 00:09:43.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.452 rtt min/avg/max/mdev = 0.525/0.525/0.525/0.000 ms 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=694316 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 694316 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@830 -- # '[' -z 694316 ']' 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:43.452 10:35:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:43.452 [2024-06-10 10:35:06.767573] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
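For reference, the nvmf_tcp_init trace above reduces to the following shell sequence. This is a condensed sketch using the interface names (cvl_0_0, cvl_0_1) and the 10.0.0.0/24 addresses from this particular run; it is not a replacement for nvmf/common.sh itself.

  # The target port is moved into its own network namespace; the initiator port stays in the default one.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Allow NVMe/TCP (port 4420) in on the initiator side, then sanity-check both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With the two physical e810 ports wired to each other (as the successful pings suggest), the initiator in the default namespace gets a TCP path to a target listening on 10.0.0.2:4420 inside cvl_0_0_ns_spdk; the nvmf_tgt startup that continues below runs entirely inside that namespace via ip netns exec.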
00:09:43.452 [2024-06-10 10:35:06.767637] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.452 EAL: No free 2048 kB hugepages reported on node 1 00:09:43.452 [2024-06-10 10:35:06.855804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:43.452 [2024-06-10 10:35:06.949646] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:43.453 [2024-06-10 10:35:06.949703] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:43.453 [2024-06-10 10:35:06.949711] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:43.453 [2024-06-10 10:35:06.949718] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:43.453 [2024-06-10 10:35:06.949724] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:43.453 [2024-06-10 10:35:06.949857] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:09:43.453 [2024-06-10 10:35:06.950020] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.453 [2024-06-10 10:35:06.950020] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:09:43.453 10:35:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:43.453 10:35:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@863 -- # return 0 00:09:43.453 10:35:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:43.453 10:35:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:43.453 10:35:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:43.453 10:35:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:43.453 10:35:07 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:43.453 10:35:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.453 10:35:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:43.453 [2024-06-10 10:35:07.574940] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:43.453 10:35:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.453 10:35:07 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:43.453 10:35:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.453 10:35:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:43.453 Malloc0 00:09:43.453 10:35:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.453 10:35:07 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:43.453 10:35:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.453 10:35:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:43.453 Delay0 00:09:43.453 10:35:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.453 10:35:07 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:43.453 10:35:07 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.453 10:35:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:43.453 10:35:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.453 10:35:07 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:43.453 10:35:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.453 10:35:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:43.453 10:35:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.453 10:35:07 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:43.453 10:35:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.453 10:35:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:43.453 [2024-06-10 10:35:07.654260] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:43.453 [2024-06-10 10:35:07.654481] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:43.453 10:35:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.453 10:35:07 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:43.453 10:35:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:43.453 10:35:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:43.453 10:35:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:43.453 10:35:07 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:43.453 EAL: No free 2048 kB hugepages reported on node 1 00:09:43.714 [2024-06-10 10:35:07.816318] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:46.258 Initializing NVMe Controllers 00:09:46.258 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:46.258 controller IO queue size 128 less than required 00:09:46.258 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:46.258 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:46.258 Initialization complete. Launching workers. 
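The abort run whose completion statistics follow was prepared by target/abort.sh through the rpc_cmd calls traced above. Issued directly against scripts/rpc.py they look roughly like this; the $RPC shorthand and the relative paths are assumptions for readability, and rpc_cmd is the test framework's wrapper around the same RPCs.

  RPC=./scripts/rpc.py                                   # talks to the target's default RPC socket
  $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256    # TCP transport, options as traced above
  $RPC bdev_malloc_create 64 4096 -b Malloc0             # 64 MB malloc bdev with 4096-byte blocks
  $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
      # artificial latency keeps I/O in flight long enough for aborts to land
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The abort example then connects from the default namespace over cvl_0_1 and issues aborts against queued I/O, producing the NS/CTRLR counters below.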
00:09:46.258 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 32103 00:09:46.258 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32164, failed to submit 62 00:09:46.258 success 32107, unsuccess 57, failed 0 00:09:46.258 10:35:10 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:46.258 10:35:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:09:46.258 10:35:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:46.258 10:35:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:09:46.258 10:35:10 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:46.258 10:35:10 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:46.258 10:35:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:46.258 10:35:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:09:46.258 10:35:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:46.258 10:35:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:09:46.258 10:35:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:46.258 10:35:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:46.258 rmmod nvme_tcp 00:09:46.258 rmmod nvme_fabrics 00:09:46.258 rmmod nvme_keyring 00:09:46.258 10:35:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:46.258 10:35:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:09:46.258 10:35:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:09:46.258 10:35:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 694316 ']' 00:09:46.258 10:35:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 694316 00:09:46.258 10:35:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@949 -- # '[' -z 694316 ']' 00:09:46.258 10:35:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # kill -0 694316 00:09:46.258 10:35:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # uname 00:09:46.258 10:35:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:09:46.258 10:35:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 694316 00:09:46.258 10:35:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:09:46.258 10:35:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:09:46.258 10:35:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # echo 'killing process with pid 694316' 00:09:46.258 killing process with pid 694316 00:09:46.258 10:35:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@968 -- # kill 694316 00:09:46.258 [2024-06-10 10:35:10.193159] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:46.258 10:35:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@973 -- # wait 694316 00:09:46.258 10:35:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:46.258 10:35:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:46.258 10:35:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:46.258 10:35:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:46.258 10:35:10 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:46.258 10:35:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.258 10:35:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:46.258 10:35:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.173 10:35:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:48.173 00:09:48.173 real 0m12.998s 00:09:48.173 user 0m14.063s 00:09:48.173 sys 0m6.245s 00:09:48.173 10:35:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:48.173 10:35:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:48.173 ************************************ 00:09:48.173 END TEST nvmf_abort 00:09:48.173 ************************************ 00:09:48.173 10:35:12 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:48.173 10:35:12 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:09:48.173 10:35:12 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:48.173 10:35:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:48.434 ************************************ 00:09:48.434 START TEST nvmf_ns_hotplug_stress 00:09:48.434 ************************************ 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:48.434 * Looking for test storage... 00:09:48.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:48.434 10:35:12 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:48.434 10:35:12 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:48.434 10:35:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:56.603 10:35:19 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:56.603 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:56.603 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.603 
10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:56.603 Found net devices under 0000:31:00.0: cvl_0_0 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:56.603 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:56.604 Found net devices under 0000:31:00.1: cvl_0_1 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:56.604 
10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:56.604 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:56.604 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.585 ms 00:09:56.604 00:09:56.604 --- 10.0.0.2 ping statistics --- 00:09:56.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.604 rtt min/avg/max/mdev = 0.585/0.585/0.585/0.000 ms 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:56.604 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:56.604 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:09:56.604 00:09:56.604 --- 10.0.0.1 ping statistics --- 00:09:56.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.604 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=699279 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 699279 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@830 -- # '[' -z 699279 ']' 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:56.604 10:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:56.604 [2024-06-10 10:35:19.890667] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:09:56.604 [2024-06-10 10:35:19.890722] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:56.604 EAL: No free 2048 kB hugepages reported on node 1 00:09:56.604 [2024-06-10 10:35:19.977504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:56.604 [2024-06-10 10:35:20.080295] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:56.604 [2024-06-10 10:35:20.080360] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:56.604 [2024-06-10 10:35:20.080368] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:56.604 [2024-06-10 10:35:20.080375] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:56.604 [2024-06-10 10:35:20.080381] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:56.604 [2024-06-10 10:35:20.080543] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:09:56.604 [2024-06-10 10:35:20.080812] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:09:56.604 [2024-06-10 10:35:20.080812] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.604 10:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:56.604 10:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@863 -- # return 0 00:09:56.604 10:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:56.604 10:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:56.604 10:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:56.604 10:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:56.604 10:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:56.604 10:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:56.604 [2024-06-10 10:35:20.851325] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:56.902 10:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:56.902 10:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:57.174 [2024-06-10 10:35:21.188545] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:57.174 [2024-06-10 10:35:21.188813] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:57.174 10:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:57.174 10:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:09:57.436 Malloc0 00:09:57.436 10:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:57.436 Delay0 00:09:57.697 10:35:21 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:57.697 10:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:57.958 NULL1 00:09:57.958 10:35:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:57.958 10:35:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=699771 00:09:57.958 10:35:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:57.958 10:35:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:09:57.958 10:35:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.218 EAL: No free 2048 kB hugepages reported on node 1 00:09:58.218 10:35:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:58.479 10:35:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:58.479 10:35:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:58.479 [2024-06-10 10:35:22.716835] bdev.c:5000:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:09:58.479 true 00:09:58.479 10:35:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:09:58.479 10:35:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.741 10:35:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:59.001 10:35:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:59.001 10:35:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:59.001 true 00:09:59.001 10:35:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:09:59.001 10:35:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:59.262 10:35:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:59.523 10:35:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:59.523 10:35:23 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:59.523 true 00:09:59.523 10:35:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:09:59.523 10:35:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:59.784 10:35:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.044 10:35:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:00.044 10:35:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:00.044 true 00:10:00.044 10:35:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:00.044 10:35:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.304 10:35:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.564 10:35:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:00.564 10:35:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:00.564 true 00:10:00.564 10:35:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:00.564 10:35:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.824 10:35:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:01.085 10:35:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:01.085 10:35:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:01.085 true 00:10:01.085 10:35:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:01.085 10:35:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.345 10:35:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:01.606 10:35:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:01.606 10:35:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:01.606 true 00:10:01.606 
10:35:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:01.606 10:35:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.867 10:35:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:01.867 10:35:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:01.867 10:35:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:02.148 true 00:10:02.148 10:35:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:02.148 10:35:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.409 10:35:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.409 10:35:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:02.409 10:35:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:02.669 true 00:10:02.669 10:35:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:02.669 10:35:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.930 10:35:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.930 10:35:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:02.930 10:35:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:03.191 true 00:10:03.191 10:35:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:03.191 10:35:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.451 10:35:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.451 10:35:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:03.451 10:35:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:03.712 true 00:10:03.712 10:35:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:03.712 10:35:27 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.712 10:35:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.973 10:35:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:03.973 10:35:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:04.233 true 00:10:04.233 10:35:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:04.233 10:35:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.233 10:35:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:04.494 10:35:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:04.494 10:35:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:04.754 true 00:10:04.754 10:35:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:04.754 10:35:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.754 10:35:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.015 10:35:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:05.015 10:35:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:05.275 true 00:10:05.275 10:35:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:05.275 10:35:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.275 10:35:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.536 10:35:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:05.536 10:35:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:05.797 true 00:10:05.797 10:35:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:05.797 10:35:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.797 
10:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:06.058 10:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:06.058 10:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:06.058 true 00:10:06.318 10:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:06.318 10:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.318 10:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:06.579 10:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:06.579 10:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:06.579 true 00:10:06.839 10:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:06.839 10:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.839 10:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.100 10:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:07.100 10:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:07.100 true 00:10:07.100 10:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:07.100 10:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.361 10:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.622 10:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:07.622 10:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:07.622 true 00:10:07.622 10:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:07.622 10:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.882 10:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.143 10:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:08.143 10:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:08.143 true 00:10:08.143 10:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:08.143 10:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.404 10:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.664 10:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:08.664 10:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:08.664 true 00:10:08.664 10:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:08.664 10:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.925 10:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:09.186 10:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:09.186 10:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:09.186 true 00:10:09.186 10:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:09.186 10:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.446 10:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:09.707 10:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:09.707 10:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:09.707 true 00:10:09.707 10:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:09.707 10:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.967 10:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.228 10:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:10.228 10:35:34 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:10.228 true 00:10:10.228 10:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:10.228 10:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.490 10:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.490 10:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:10.490 10:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:10.751 true 00:10:10.751 10:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:10.751 10:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.012 10:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.012 10:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:11.012 10:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:11.273 true 00:10:11.273 10:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:11.273 10:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.534 10:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.534 10:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:11.534 10:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:11.795 true 00:10:11.795 10:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:11.795 10:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.055 10:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.055 10:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:12.055 10:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 
00:10:12.317 true 00:10:12.317 10:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:12.317 10:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.317 10:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.578 10:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:12.578 10:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:12.838 true 00:10:12.838 10:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:12.838 10:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.838 10:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.100 10:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:10:13.100 10:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:10:13.361 true 00:10:13.361 10:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:13.361 10:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.361 10:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.622 10:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:10:13.622 10:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:10:13.622 true 00:10:13.882 10:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:13.882 10:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.882 10:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.143 10:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:10:14.143 10:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:10:14.143 true 00:10:14.405 10:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:14.405 10:35:38 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.405 10:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.665 10:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:10:14.665 10:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:10:14.665 true 00:10:14.665 10:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:14.665 10:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.926 10:35:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.187 10:35:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:10:15.187 10:35:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:10:15.187 true 00:10:15.187 10:35:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:15.187 10:35:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.448 10:35:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.710 10:35:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:10:15.710 10:35:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:10:15.710 true 00:10:15.710 10:35:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:15.710 10:35:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.971 10:35:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.233 10:35:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:10:16.233 10:35:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:10:16.233 true 00:10:16.233 10:35:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:16.233 10:35:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:10:16.495 10:35:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.755 10:35:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:10:16.755 10:35:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:10:16.755 true 00:10:16.755 10:35:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:16.755 10:35:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.016 10:35:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.276 10:35:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:10:17.276 10:35:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:10:17.276 true 00:10:17.276 10:35:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:17.276 10:35:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.537 10:35:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.537 10:35:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:10:17.537 10:35:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:10:17.798 true 00:10:17.798 10:35:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:17.798 10:35:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.059 10:35:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.059 10:35:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:10:18.059 10:35:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:10:18.320 true 00:10:18.320 10:35:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:18.320 10:35:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.580 10:35:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.580 10:35:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:10:18.580 10:35:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:10:18.840 true 00:10:18.840 10:35:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:18.840 10:35:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.100 10:35:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.100 10:35:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:10:19.100 10:35:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:10:19.360 true 00:10:19.360 10:35:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:19.360 10:35:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.621 10:35:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.621 10:35:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:10:19.621 10:35:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:10:19.882 true 00:10:19.882 10:35:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:19.882 10:35:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.143 10:35:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.143 10:35:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:10:20.143 10:35:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:10:20.404 true 00:10:20.404 10:35:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:20.404 10:35:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.404 10:35:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.664 10:35:44 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:10:20.664 10:35:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:10:20.925 true 00:10:20.925 10:35:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:20.925 10:35:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.925 10:35:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.187 10:35:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:10:21.187 10:35:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:10:21.448 true 00:10:21.448 10:35:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:21.448 10:35:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.448 10:35:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.708 10:35:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:10:21.708 10:35:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:10:21.969 true 00:10:21.969 10:35:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:21.969 10:35:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.969 10:35:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.229 10:35:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:10:22.229 10:35:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:10:22.491 true 00:10:22.491 10:35:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:22.491 10:35:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.491 10:35:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.752 10:35:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:10:22.752 10:35:46 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:10:22.752 true 00:10:23.014 10:35:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:23.014 10:35:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.014 10:35:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.275 10:35:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:10:23.275 10:35:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:10:23.275 true 00:10:23.275 10:35:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:23.275 10:35:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.537 10:35:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.798 10:35:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:10:23.798 10:35:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:10:23.798 true 00:10:23.798 10:35:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:23.798 10:35:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.059 10:35:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.370 10:35:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:10:24.370 10:35:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:10:24.370 true 00:10:24.370 10:35:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:24.370 10:35:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.681 10:35:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.681 10:35:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:10:24.681 10:35:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:10:24.941 true 00:10:24.941 10:35:49 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:24.941 10:35:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.202 10:35:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.202 10:35:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:10:25.202 10:35:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:10:25.463 true 00:10:25.463 10:35:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:25.463 10:35:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.463 10:35:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.724 10:35:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:10:25.724 10:35:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:10:25.985 true 00:10:25.985 10:35:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:25.985 10:35:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.985 10:35:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.245 10:35:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056 00:10:26.245 10:35:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:10:26.505 true 00:10:26.505 10:35:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:26.505 10:35:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.505 10:35:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.766 10:35:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1057 00:10:26.766 10:35:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057 00:10:27.026 true 00:10:27.026 10:35:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:27.026 10:35:51 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.026 10:35:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.287 10:35:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1058 00:10:27.287 10:35:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058 00:10:27.287 true 00:10:27.548 10:35:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:27.548 10:35:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.548 10:35:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.809 10:35:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1059 00:10:27.809 10:35:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1059 00:10:27.809 true 00:10:27.809 10:35:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:27.809 10:35:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.076 10:35:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.340 10:35:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1060 00:10:28.340 10:35:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1060 00:10:28.340 Initializing NVMe Controllers 00:10:28.340 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:28.340 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1 00:10:28.340 Controller IO queue size 128, less than required. 00:10:28.340 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:28.340 WARNING: Some requested NVMe devices were skipped 00:10:28.340 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:10:28.340 Initialization complete. Launching workers. 
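[Editor's note] The trace above is the single-namespace leg of ns_hotplug_stress.sh: while the background NVMe-oF initiator workload (PID 699771 in this run) is still alive, the test keeps removing namespace 1 from nqn.2016-06.io.spdk:cnode1, re-adding the Delay0 bdev as that namespace, and resizing the NULL1 null bdev by one unit per pass (null_size 1012 through 1060 in this excerpt). The following is a minimal sketch of that loop reconstructed from the trace, not copied from the SPDK source; the $rpc/$nqn shorthand and the exact loop condition are editorial simplifications.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  perf_pid=699771          # background initiator workload observed in this run

  null_size=1000
  # Hot-plug namespace 1 and grow NULL1 for as long as the I/O workload is running.
  while kill -0 "$perf_pid" 2>/dev/null; do
      $rpc nvmf_subsystem_remove_ns "$nqn" 1      # detach namespace 1 under live I/O
      $rpc nvmf_subsystem_add_ns "$nqn" Delay0    # re-attach the Delay0 bdev as a namespace
      null_size=$((null_size + 1))
      $rpc bdev_null_resize NULL1 "$null_size"    # resize the null bdev each iteration
  done
  wait "$perf_pid"                                # reap the workload once kill -0 fails

The latency summary immediately below is the initiator's own report once its I/O completes, and the "No such process" from kill -0 further down is what ends this phase.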
00:10:28.340 ======================================================== 00:10:28.340 Latency(us) 00:10:28.340 Device Information : IOPS MiB/s Average min max 00:10:28.340 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30942.88 15.11 4136.50 1580.50 10350.99 00:10:28.340 ======================================================== 00:10:28.340 Total : 30942.88 15.11 4136.50 1580.50 10350.99 00:10:28.340 00:10:28.340 true 00:10:28.340 10:35:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 699771 00:10:28.340 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (699771) - No such process 00:10:28.340 10:35:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 699771 00:10:28.340 10:35:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.601 10:35:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:28.863 10:35:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:10:28.863 10:35:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:10:28.863 10:35:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:10:28.863 10:35:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:28.863 10:35:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:10:28.863 null0 00:10:28.863 10:35:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:28.863 10:35:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:28.863 10:35:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:10:29.124 null1 00:10:29.124 10:35:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:29.124 10:35:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:29.124 10:35:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:29.124 null2 00:10:29.385 10:35:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:29.385 10:35:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:29.385 10:35:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:29.385 null3 00:10:29.385 10:35:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:29.385 10:35:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:29.385 10:35:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:29.646 null4 
00:10:29.646 10:35:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:29.646 10:35:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:29.646 10:35:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:29.646 null5 00:10:29.907 10:35:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:29.907 10:35:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:29.907 10:35:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:29.907 null6 00:10:29.907 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:29.907 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:29.907 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:30.169 null7 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
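[Editor's note] With the single-namespace phase finished, the log switches to the eight-way stress setup: both namespaces are removed, nthreads is set to 8, and eight null bdevs (null0 through null7) are created before one add_remove job per bdev is launched in the background; the spawning continues in the trace below. A hedged sketch of the creation step, under the same $rpc assumption as the earlier sketch; the 100 and 4096 arguments are taken verbatim from the traced bdev_null_create calls, while the loop form is an approximation of the script's @59-@60 lines.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nthreads=8
  pids=()

  # One null bdev per worker, mirroring the bdev_null_create calls in the trace.
  for ((i = 0; i < nthreads; i++)); do
      $rpc bdev_null_create "null$i" 100 4096
  done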
00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
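[Editor's note] Each background job being spawned here runs the script's add_remove helper: it pins one namespace ID to one null bdev (nsid 1 to null0, nsid 2 to null1, and so on) and adds and removes that namespace ten times, after which the parent waits on all eight worker PIDs, as the wait line in the trace that follows shows. A rough reconstruction from the @14-@18 and @62-@64 trace lines, with argument handling and error checking omitted.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  nthreads=8
  pids=()

  # Repeatedly attach and detach one namespace; nsid/bdev pairs come from the spawn loop.
  add_remove() {
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          $rpc nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
          $rpc nvmf_subsystem_remove_ns "$nqn" "$nsid"
      done
  }

  # Launch one worker per null bdev, then wait for all of them to finish.
  for ((i = 0; i < nthreads; i++)); do
      add_remove $((i + 1)) "null$i" &
      pids+=($!)
  done
  wait "${pids[@]}"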
00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 706307 706308 706310 706312 706314 706316 706318 706320 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:30.169 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:30.432 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.432 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:30.432 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:30.432 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.432 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.432 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:30.432 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.432 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.432 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:30.432 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.432 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.432 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:30.432 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.432 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.432 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:30.432 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.432 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.432 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:30.432 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.432 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.432 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:30.432 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.432 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.432 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:30.432 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.432 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.432 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:30.432 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:30.693 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:30.693 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:30.694 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:30.694 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.694 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:30.694 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:30.694 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:30.694 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.694 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.694 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:30.694 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.694 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.694 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:30.694 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.694 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.694 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:30.694 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.694 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.694 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:30.694 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.694 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.694 10:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:30.956 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.956 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.956 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:30.956 
10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.956 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.956 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:30.956 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.956 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.956 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:30.956 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:30.956 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:30.956 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:30.956 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:30.956 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.956 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:30.956 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:30.956 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:30.956 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.956 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.956 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:31.217 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.217 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.217 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:31.217 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.217 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.218 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:31.218 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.218 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.218 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:31.218 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.218 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.218 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:31.218 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.218 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.218 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:31.218 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.218 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.218 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:31.218 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.218 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.218 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:31.218 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:31.218 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:31.218 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:31.218 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:31.218 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.218 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:31.218 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:31.479 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:31.479 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.479 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.479 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:31.479 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.479 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.479 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.479 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:31.479 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.479 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:31.479 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.479 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.479 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:31.479 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.479 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.479 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:31.479 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.479 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.479 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:31.479 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.479 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.479 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:31.479 10:35:55 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.479 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.479 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:31.479 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:31.479 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:31.479 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:31.740 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:31.740 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.740 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:31.740 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:31.740 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.740 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.740 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.740 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:31.740 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.740 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:31.740 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:31.740 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.740 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.741 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:31.741 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.741 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.741 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:31.741 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.741 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.741 10:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:32.003 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.003 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.003 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:32.003 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.003 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.003 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:32.003 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:32.003 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:32.003 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:32.003 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.003 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.003 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:32.003 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:32.003 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.003 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:32.003 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:32.003 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:10:32.003 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.003 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:32.003 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.003 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.003 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.003 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.003 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:32.003 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:32.003 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:32.003 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.003 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.003 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:32.003 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.264 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.264 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:32.264 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.264 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.264 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:32.264 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.264 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.264 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:32.264 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:32.264 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:32.264 10:35:56 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.265 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.265 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:32.265 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:32.265 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.265 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:32.265 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:32.525 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:32.525 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.525 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.525 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:32.525 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.525 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.526 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:32.526 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:32.526 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.526 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.526 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:32.526 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.526 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.526 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:32.526 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.526 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.526 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:32.526 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.526 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.526 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:32.526 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.526 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.526 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:32.526 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:32.526 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.526 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.526 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:32.526 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:32.526 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.526 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:32.787 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:32.787 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:32.787 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:32.787 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.787 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.787 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:32.787 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:10:32.787 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.787 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:32.787 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.787 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.787 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:32.787 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.787 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.787 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:32.787 10:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:32.787 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.787 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.787 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:32.787 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.787 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.787 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:32.787 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.787 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.787 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:33.048 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:33.048 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:33.048 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:33.048 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.048 10:35:57 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:33.048 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:33.048 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.048 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.048 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:33.048 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:33.048 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.048 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.048 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:33.048 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.048 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.048 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:33.048 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.048 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.048 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:33.048 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.048 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.048 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:33.309 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.309 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.309 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:33.309 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.309 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.309 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:33.309 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.309 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.309 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:33.309 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:33.309 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:33.309 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:33.309 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:33.309 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:33.309 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:33.309 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.309 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.310 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.310 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:33.310 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:33.310 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.310 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.570 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.570 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.570 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.570 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.570 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.570 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.570 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
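The interleaved records above are ns_hotplug_stress.sh repeatedly hot-adding and hot-removing namespaces on nqn.2016-06.io.spdk:cnode1 through rpc.py. A minimal sketch of that loop, reconstructed from the traced @16-@18 lines (the add_remove helper name and the one-backgrounded-worker-per-null-bdev structure are assumptions; only rpc.py, the NQN and the null0-null7 bdev names are taken from the log):

# Sketch only, not the verbatim SPDK script.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path as traced
nqn=nqn.2016-06.io.spdk:cnode1

add_remove() {                                    # assumed helper name
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; ++i)); do                # matches the traced (( i < 10 )) / (( ++i ))
        "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"     # script line @17
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"             # script line @18
    done
}

# One worker per null bdev, run in parallel, which is why the add/remove
# records above arrive out of order.
for n in {0..7}; do
    add_remove "$((n + 1))" "null$n" &
done
wait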
00:10:33.570 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.570 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:33.570 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.570 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.570 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.570 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.831 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.831 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.831 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:33.831 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:33.831 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:33.831 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:10:33.831 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:33.831 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:10:33.831 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:33.831 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:33.832 rmmod nvme_tcp 00:10:33.832 rmmod nvme_fabrics 00:10:33.832 rmmod nvme_keyring 00:10:33.832 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:33.832 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:10:33.832 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:10:33.832 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 699279 ']' 00:10:33.832 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 699279 00:10:33.832 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@949 -- # '[' -z 699279 ']' 00:10:33.832 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # kill -0 699279 00:10:33.832 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # uname 00:10:33.832 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:33.832 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 699279 00:10:33.832 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:10:33.832 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:10:33.832 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 699279' 00:10:33.832 killing process with pid 699279 00:10:33.832 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # kill 699279 00:10:33.832 [2024-06-10 10:35:57.990660] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in 
favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:33.832 10:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # wait 699279 00:10:33.832 10:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:33.832 10:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:33.832 10:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:33.832 10:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:33.832 10:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:33.832 10:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.832 10:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:33.832 10:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.381 10:36:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:36.381 00:10:36.381 real 0m47.682s 00:10:36.381 user 3m13.973s 00:10:36.381 sys 0m16.494s 00:10:36.381 10:36:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:36.381 10:36:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:36.381 ************************************ 00:10:36.381 END TEST nvmf_ns_hotplug_stress 00:10:36.381 ************************************ 00:10:36.381 10:36:00 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:36.381 10:36:00 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:10:36.381 10:36:00 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:36.381 10:36:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:36.381 ************************************ 00:10:36.381 START TEST nvmf_connect_stress 00:10:36.381 ************************************ 00:10:36.381 10:36:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:36.381 * Looking for test storage... 
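The teardown that closes nvmf_ns_hotplug_stress reads directly from the trace: clear the EXIT trap, unload the host NVMe-over-TCP modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), kill the nvmf_tgt reactor by PID, and drop the test namespace and its addresses. A condensed sketch of that sequence, with an illustrative function name standing in for the nvmftestfini/killprocess helpers in nvmf/common.sh and autotest_common.sh:

# Condensed teardown; the helper name and the netns delete form are assumptions.
nvmf_teardown() {
    local tgt_pid=$1                      # 699279 in the run above
    trap - SIGINT SIGTERM EXIT            # ns_hotplug_stress.sh@68
    sync
    modprobe -v -r nvme-tcp               # pulls out nvme_tcp / nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics
    if kill -0 "$tgt_pid" 2>/dev/null; then
        kill "$tgt_pid"
        wait "$tgt_pid" 2>/dev/null || true
    fi
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # remove_spdk_ns equivalent (assumed form)
    ip -4 addr flush cvl_0_1                              # nvmf/common.sh@279 in the trace
}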
00:10:36.381 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:36.381 10:36:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:36.381 10:36:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:36.381 10:36:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:36.381 10:36:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:36.381 10:36:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:36.381 10:36:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:36.381 10:36:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:36.381 10:36:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:36.381 10:36:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:36.381 10:36:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:36.381 10:36:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:36.381 10:36:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:36.381 10:36:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:36.381 10:36:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:36.381 10:36:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:36.381 10:36:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:36.381 10:36:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:36.381 10:36:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:36.381 10:36:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:36.381 10:36:00 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:36.381 10:36:00 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:36.381 10:36:00 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:36.381 10:36:00 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.381 10:36:00 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.381 10:36:00 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.381 10:36:00 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:36.381 10:36:00 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.381 10:36:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:36.381 10:36:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:36.381 10:36:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:36.381 10:36:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:36.381 10:36:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:36.381 10:36:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:36.381 10:36:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:36.381 10:36:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:36.381 10:36:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:36.382 10:36:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:36.382 10:36:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:36.382 10:36:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:36.382 10:36:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:36.382 10:36:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:36.382 10:36:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:36.382 10:36:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.382 10:36:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:36.382 10:36:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.382 10:36:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:36.382 10:36:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:36.382 10:36:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:36.382 10:36:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:44.533 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:44.533 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:44.533 Found net devices under 0000:31:00.0: cvl_0_0 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:44.533 10:36:07 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:44.533 Found net devices under 0000:31:00.1: cvl_0_1 00:10:44.533 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:44.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:44.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:10:44.534 00:10:44.534 --- 10.0.0.2 ping statistics --- 00:10:44.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.534 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:44.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:44.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.349 ms 00:10:44.534 00:10:44.534 --- 10.0.0.1 ping statistics --- 00:10:44.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.534 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=711604 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 711604 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@830 -- # '[' -z 711604 ']' 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:44.534 10:36:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:44.534 [2024-06-10 10:36:07.904251] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
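For readability, here is the nvmf_tcp_init sequence that the xtrace above just walked through, condensed into a standalone sketch. It only restates commands already visible in the log; the namespace (cvl_0_0_ns_spdk), interface names (cvl_0_0 / cvl_0_1) and addresses (10.0.0.1, 10.0.0.2) are specific to this E810 run, and this is not the verbatim nvmf/common.sh implementation.

#!/usr/bin/env bash
# Sketch of the TCP test-network bring-up seen in the log above.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_TARGET_INTERFACE=cvl_0_0        # moved into the namespace, gets 10.0.0.2
NVMF_INITIATOR_INTERFACE=cvl_0_1     # stays in the host namespace, gets 10.0.0.1

ip -4 addr flush "$NVMF_TARGET_INTERFACE"
ip -4 addr flush "$NVMF_INITIATOR_INTERFACE"

ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set "$NVMF_TARGET_INTERFACE" netns "$NVMF_TARGET_NAMESPACE"

ip addr add 10.0.0.1/24 dev "$NVMF_INITIATOR_INTERFACE"
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev "$NVMF_TARGET_INTERFACE"

ip link set "$NVMF_INITIATOR_INTERFACE" up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set "$NVMF_TARGET_INTERFACE" up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up

# Open the default NVMe/TCP port toward the initiator interface and verify both legs.
iptables -I INPUT 1 -i "$NVMF_INITIATOR_INTERFACE" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1

The two ping results above (0.641 ms toward the target address, 0.349 ms back toward the initiator) are the verification step at the end of this sequence.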
00:10:44.534 [2024-06-10 10:36:07.904312] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:44.534 EAL: No free 2048 kB hugepages reported on node 1 00:10:44.534 [2024-06-10 10:36:07.995323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:44.534 [2024-06-10 10:36:08.088764] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:44.534 [2024-06-10 10:36:08.088823] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:44.534 [2024-06-10 10:36:08.088831] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:44.534 [2024-06-10 10:36:08.088839] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:44.534 [2024-06-10 10:36:08.088845] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:44.534 [2024-06-10 10:36:08.088986] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:10:44.534 [2024-06-10 10:36:08.089151] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.534 [2024-06-10 10:36:08.089152] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:10:44.534 10:36:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:44.534 10:36:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@863 -- # return 0 00:10:44.534 10:36:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:44.534 10:36:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:10:44.534 10:36:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:44.534 10:36:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:44.534 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:44.534 10:36:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:44.534 10:36:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:44.534 [2024-06-10 10:36:08.734249] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:44.534 10:36:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:44.534 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:44.534 10:36:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:44.534 10:36:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:44.534 10:36:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:44.534 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:44.534 10:36:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:44.534 10:36:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:44.534 [2024-06-10 10:36:08.758477] nvmf_rpc.c: 615:decode_rpc_listen_address: 
*WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:44.534 [2024-06-10 10:36:08.774366] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:44.534 10:36:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:44.534 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:44.534 10:36:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:44.534 10:36:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:44.534 NULL1 00:10:44.534 10:36:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:44.534 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=711784 00:10:44.534 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:44.534 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:44.535 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:44.535 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:10:44.535 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.535 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.535 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.535 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.535 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.535 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.535 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.535 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.796 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.796 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.796 EAL: No free 2048 kB hugepages reported on node 1 00:10:44.796 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.796 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.796 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.796 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.796 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.796 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.796 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.796 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.796 10:36:08 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.796 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.796 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.796 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.796 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.796 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.796 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.796 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.796 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.796 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.796 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.796 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.796 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.796 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.796 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.796 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.796 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.796 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.796 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.796 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.796 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.796 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.796 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 711784 00:10:44.796 10:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:44.796 10:36:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:44.796 10:36:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:45.057 10:36:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:45.057 10:36:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 711784 00:10:45.057 10:36:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:45.057 10:36:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:45.057 10:36:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:45.319 10:36:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:45.319 10:36:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 711784 00:10:45.319 10:36:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:45.319 10:36:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:45.319 10:36:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:45.580 10:36:09 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:45.580 10:36:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 711784 00:10:45.580 10:36:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:45.580 10:36:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:45.580 10:36:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:46.153 10:36:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:46.153 10:36:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 711784 00:10:46.153 10:36:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:46.153 10:36:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:46.153 10:36:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:46.414 10:36:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:46.414 10:36:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 711784 00:10:46.414 10:36:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:46.414 10:36:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:46.414 10:36:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:46.676 10:36:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:46.676 10:36:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 711784 00:10:46.676 10:36:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:46.676 10:36:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:46.676 10:36:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:46.936 10:36:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:46.936 10:36:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 711784 00:10:46.936 10:36:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:46.936 10:36:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:46.936 10:36:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:47.508 10:36:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:47.508 10:36:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 711784 00:10:47.508 10:36:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:47.508 10:36:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:47.508 10:36:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:47.768 10:36:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:47.768 10:36:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 711784 00:10:47.768 10:36:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:47.768 10:36:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:47.768 10:36:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:48.028 10:36:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # 
[[ 0 == 0 ]] 00:10:48.028 10:36:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 711784 00:10:48.028 10:36:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:48.028 10:36:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:48.028 10:36:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:48.288 10:36:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:48.288 10:36:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 711784 00:10:48.288 10:36:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:48.288 10:36:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:48.288 10:36:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:48.550 10:36:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:48.550 10:36:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 711784 00:10:48.550 10:36:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:48.550 10:36:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:48.550 10:36:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.121 10:36:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:49.121 10:36:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 711784 00:10:49.122 10:36:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.122 10:36:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:49.122 10:36:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.382 10:36:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:49.382 10:36:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 711784 00:10:49.382 10:36:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.382 10:36:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:49.382 10:36:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.642 10:36:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:49.642 10:36:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 711784 00:10:49.642 10:36:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.642 10:36:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:49.642 10:36:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.903 10:36:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:49.903 10:36:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 711784 00:10:49.903 10:36:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.903 10:36:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:49.903 10:36:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:50.164 10:36:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:50.164 10:36:14 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 711784 00:10:50.164 10:36:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.164 10:36:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:50.164 10:36:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:50.735 10:36:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:50.735 10:36:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 711784 00:10:50.735 10:36:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.735 10:36:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:50.735 10:36:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:50.995 10:36:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:50.995 10:36:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 711784 00:10:50.995 10:36:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.995 10:36:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:50.995 10:36:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:51.255 10:36:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:51.255 10:36:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 711784 00:10:51.255 10:36:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.255 10:36:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:51.255 10:36:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:51.515 10:36:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:51.515 10:36:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 711784 00:10:51.515 10:36:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.515 10:36:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:51.515 10:36:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:51.776 10:36:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:51.776 10:36:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 711784 00:10:51.776 10:36:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.776 10:36:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:51.776 10:36:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.347 10:36:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.347 10:36:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 711784 00:10:52.347 10:36:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.347 10:36:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.347 10:36:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.608 10:36:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.608 10:36:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 711784 00:10:52.608 
10:36:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.608 10:36:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.608 10:36:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.868 10:36:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:52.868 10:36:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 711784 00:10:52.868 10:36:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.868 10:36:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:52.868 10:36:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:53.129 10:36:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:53.129 10:36:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 711784 00:10:53.129 10:36:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.129 10:36:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:53.129 10:36:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:53.701 10:36:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:53.701 10:36:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 711784 00:10:53.701 10:36:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.701 10:36:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:53.701 10:36:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:53.961 10:36:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:53.961 10:36:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 711784 00:10:53.961 10:36:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.961 10:36:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:53.961 10:36:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.222 10:36:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:54.222 10:36:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 711784 00:10:54.222 10:36:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.222 10:36:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:54.222 10:36:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.483 10:36:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:54.483 10:36:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 711784 00:10:54.483 10:36:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.483 10:36:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:54.483 10:36:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.744 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:54.744 10:36:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:54.744 10:36:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # 
kill -0 711784 00:10:54.744 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (711784) - No such process 00:10:54.744 10:36:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 711784 00:10:54.744 10:36:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:54.744 10:36:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:54.744 10:36:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:54.744 10:36:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:54.744 10:36:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:10:54.744 10:36:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:54.744 10:36:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:10:54.744 10:36:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:54.744 10:36:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:54.744 rmmod nvme_tcp 00:10:54.744 rmmod nvme_fabrics 00:10:54.744 rmmod nvme_keyring 00:10:55.005 10:36:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:55.005 10:36:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:10:55.005 10:36:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:10:55.005 10:36:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 711604 ']' 00:10:55.005 10:36:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 711604 00:10:55.005 10:36:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@949 -- # '[' -z 711604 ']' 00:10:55.005 10:36:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # kill -0 711604 00:10:55.005 10:36:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # uname 00:10:55.005 10:36:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:10:55.005 10:36:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 711604 00:10:55.005 10:36:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:10:55.005 10:36:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:10:55.005 10:36:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 711604' 00:10:55.005 killing process with pid 711604 00:10:55.005 10:36:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@968 -- # kill 711604 00:10:55.005 [2024-06-10 10:36:19.099790] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:55.005 10:36:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@973 -- # wait 711604 00:10:55.005 10:36:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:55.005 10:36:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:55.005 10:36:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:55.005 10:36:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:55.005 
10:36:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:55.005 10:36:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.005 10:36:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:55.005 10:36:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.552 10:36:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:57.553 00:10:57.553 real 0m21.029s 00:10:57.553 user 0m42.119s 00:10:57.553 sys 0m8.781s 00:10:57.553 10:36:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:57.553 10:36:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:57.553 ************************************ 00:10:57.553 END TEST nvmf_connect_stress 00:10:57.553 ************************************ 00:10:57.553 10:36:21 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:57.553 10:36:21 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:10:57.553 10:36:21 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:57.553 10:36:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:57.553 ************************************ 00:10:57.553 START TEST nvmf_fused_ordering 00:10:57.553 ************************************ 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:57.553 * Looking for test storage... 00:10:57.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
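Pulling the connect_stress fragments above together: connect_stress.sh launches the connect_stress binary against the listener created earlier, keeps the RPC path busy while that process is alive (the repeated kill -0 711784 / rpc_cmd pairs above), and then runs nvmftestfini to unload the host modules, stop the target and drop the test namespace. The sketch below is reconstructed only from commands visible in this log; what exactly is appended to rpc.txt is not shown and is not guessed at here, and the cleanup helpers (killprocess, _remove_spdk_ns) are approximated by a plain kill and ip netns delete.

#!/usr/bin/env bash
# Reconstructed outline of test/nvmf/target/connect_stress.sh as exercised in this run.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpcs=$SPDK/test/nvmf/target/rpc.txt
nvmfpid=711604   # nvmf_tgt PID reported by the log for this test

# Start the stress tool in the background against the subsystem created above.
"$SPDK/test/nvme/connect_stress/connect_stress" -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -t 10 &
PERF_PID=$!

# While the stress tool is alive, keep issuing RPCs (lines 34/35 of the script in the log).
while kill -0 "$PERF_PID" 2>/dev/null; do
    rpc_cmd < "$rpcs"    # rpc_cmd is the suite's RPC helper, treated as a black box here
done
wait "$PERF_PID"
rm -f "$rpcs"

# nvmftestfini, as logged: unload host-side NVMe modules, stop the target,
# remove the namespace and flush the initiator address.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                      # killprocess 711604 in the log
ip netns delete cvl_0_0_ns_spdk      # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1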
00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:57.553 10:36:21 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:10:57.553 10:36:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:04.228 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:04.228 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:04.228 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:04.228 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:04.228 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:04.228 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:04.228 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:04.228 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:04.228 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:04.228 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:04.228 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:04.228 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:04.228 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:11:04.228 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:04.228 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:04.228 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:04.228 
10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:04.228 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:04.228 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:04.228 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:04.228 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:04.228 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:04.228 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:04.228 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:04.228 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:04.228 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:04.228 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:04.228 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:04.228 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:04.228 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:04.228 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:04.228 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:04.228 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:04.490 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:04.490 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:04.491 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 
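The gather_supported_nvmf_pci_devs pass running through this stretch of the log matches PCI vendor/device IDs against known Intel E810/X722 and Mellanox parts, then (just below) maps each hit to its kernel net interface through /sys/bus/pci/devices/<bdf>/net/. The helper indexes a pci_bus_cache array populated elsewhere in common.sh; the sketch below substitutes a plain sysfs walk for that cache, which is an assumption, while the ID lists are exactly the ones visible above.

#!/usr/bin/env bash
# Simplified stand-in for gather_supported_nvmf_pci_devs: locate supported NICs
# by vendor/device ID and collect the net devices they expose.
intel=0x8086
mellanox=0x15b3
e810=(0x1592 0x159b)
x722=(0x37d2)
mlx=(0xa2dc 0x1021 0xa2d6 0x101d 0x1017 0x1019 0x1015 0x1013)

net_devs=()
for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor")
    device=$(<"$pci/device")
    case "$vendor" in
        "$intel")    wanted=("${e810[@]}" "${x722[@]}") ;;
        "$mellanox") wanted=("${mlx[@]}") ;;
        *)           continue ;;
    esac
    [[ " ${wanted[*]} " == *" $device "* ]] || continue
    echo "Found ${pci##*/} ($vendor - $device)"
    for net in "$pci"/net/*; do          # e.g. .../0000:31:00.0/net/cvl_0_0
        [[ -e "$net" ]] && net_devs+=("${net##*/}")
    done
done
echo "net_devs: ${net_devs[*]}"          # cvl_0_0 cvl_0_1 on this runner

On this machine the walk ends with the same two E810 ports (0x8086 - 0x159b) and the cvl_0_0 / cvl_0_1 interfaces that the log reports next.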
00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:04.491 Found net devices under 0000:31:00.0: cvl_0_0 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:04.491 Found net devices under 0000:31:00.1: cvl_0_1 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering 
-- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:04.491 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:04.754 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:04.754 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:04.754 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:04.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:04.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:11:04.754 00:11:04.754 --- 10.0.0.2 ping statistics --- 00:11:04.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.754 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:11:04.754 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:04.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:04.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:11:04.754 00:11:04.754 --- 10.0.0.1 ping statistics --- 00:11:04.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.754 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:11:04.754 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:04.754 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:11:04.754 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:04.754 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:04.754 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:04.754 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:04.754 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:04.754 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:04.754 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:04.754 10:36:28 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:04.754 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:04.754 10:36:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:04.754 10:36:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:04.754 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=718537 00:11:04.754 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 718537 00:11:04.754 10:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:04.754 10:36:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@830 -- # '[' -z 718537 ']' 00:11:04.754 10:36:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.754 10:36:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:04.754 10:36:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.754 10:36:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:04.754 10:36:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:04.754 [2024-06-10 10:36:28.929427] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:11:04.754 [2024-06-10 10:36:28.929500] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.754 EAL: No free 2048 kB hugepages reported on node 1 00:11:04.754 [2024-06-10 10:36:29.017019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.016 [2024-06-10 10:36:29.109035] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
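nvmfappstart -m 0x2 boots a single-core nvmf_tgt inside the test namespace and waits for its RPC socket before the fused_ordering provisioning that follows in the log (transport, subsystem, listener, null bdev, namespace, then the fused_ordering tool itself). Below is a condensed sketch using the command line and RPC calls visible here; scripts/rpc.py is used as a stand-in for the suite's rpc_cmd helper, and the polling loop is an assumption about what waitforlisten does.

#!/usr/bin/env bash
# Sketch of nvmfappstart plus the fused_ordering target provisioning from this run.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS=(ip netns exec cvl_0_0_ns_spdk)

# Start the target inside the namespace: shm id 0, all tracepoint groups, core mask 0x2.
"${NS[@]}" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# waitforlisten (assumed behaviour): block until the RPC socket answers.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done

rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512          # 1000 MiB null bdev, 512-byte blocks
rpc bdev_wait_for_examine
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# Run the fused_ordering tool against the new namespace; its output is the
# fused_ordering(N) lines that follow in the log.
"$SPDK/test/nvme/fused_ordering/fused_ordering" \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'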
00:11:05.016 [2024-06-10 10:36:29.109098] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:05.016 [2024-06-10 10:36:29.109106] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:05.016 [2024-06-10 10:36:29.109113] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:05.016 [2024-06-10 10:36:29.109119] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:05.016 [2024-06-10 10:36:29.109147] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:11:05.588 10:36:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:05.588 10:36:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@863 -- # return 0 00:11:05.588 10:36:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:05.588 10:36:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@729 -- # xtrace_disable 00:11:05.588 10:36:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:05.588 10:36:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:05.588 10:36:29 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:05.588 10:36:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:05.588 10:36:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:05.588 [2024-06-10 10:36:29.760459] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:05.588 10:36:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:05.588 10:36:29 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:05.588 10:36:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:05.588 10:36:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:05.588 10:36:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:05.588 10:36:29 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:05.588 10:36:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:05.589 10:36:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:05.589 [2024-06-10 10:36:29.776434] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:05.589 [2024-06-10 10:36:29.776709] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:05.589 10:36:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:05.589 10:36:29 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:05.589 10:36:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:05.589 10:36:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:05.589 NULL1 00:11:05.589 10:36:29 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:05.589 10:36:29 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:05.589 10:36:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:05.589 10:36:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:05.589 10:36:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:05.589 10:36:29 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:05.589 10:36:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:05.589 10:36:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:05.589 10:36:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:05.589 10:36:29 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:05.589 [2024-06-10 10:36:29.832352] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:11:05.589 [2024-06-10 10:36:29.832405] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid718618 ] 00:11:05.589 EAL: No free 2048 kB hugepages reported on node 1 00:11:06.170 Attached to nqn.2016-06.io.spdk:cnode1 00:11:06.170 Namespace ID: 1 size: 1GB 00:11:06.170 fused_ordering(0) 00:11:06.171 fused_ordering(1) 00:11:06.171 fused_ordering(2) 00:11:06.171 fused_ordering(3) 00:11:06.171 fused_ordering(4) 00:11:06.171 fused_ordering(5) 00:11:06.171 fused_ordering(6) 00:11:06.171 fused_ordering(7) 00:11:06.171 fused_ordering(8) 00:11:06.171 fused_ordering(9) 00:11:06.171 fused_ordering(10) 00:11:06.171 fused_ordering(11) 00:11:06.171 fused_ordering(12) 00:11:06.171 fused_ordering(13) 00:11:06.171 fused_ordering(14) 00:11:06.171 fused_ordering(15) 00:11:06.171 fused_ordering(16) 00:11:06.171 fused_ordering(17) 00:11:06.171 fused_ordering(18) 00:11:06.171 fused_ordering(19) 00:11:06.171 fused_ordering(20) 00:11:06.171 fused_ordering(21) 00:11:06.171 fused_ordering(22) 00:11:06.171 fused_ordering(23) 00:11:06.171 fused_ordering(24) 00:11:06.171 fused_ordering(25) 00:11:06.171 fused_ordering(26) 00:11:06.171 fused_ordering(27) 00:11:06.171 fused_ordering(28) 00:11:06.171 fused_ordering(29) 00:11:06.171 fused_ordering(30) 00:11:06.171 fused_ordering(31) 00:11:06.171 fused_ordering(32) 00:11:06.171 fused_ordering(33) 00:11:06.171 fused_ordering(34) 00:11:06.171 fused_ordering(35) 00:11:06.171 fused_ordering(36) 00:11:06.171 fused_ordering(37) 00:11:06.171 fused_ordering(38) 00:11:06.171 fused_ordering(39) 00:11:06.171 fused_ordering(40) 00:11:06.171 fused_ordering(41) 00:11:06.171 fused_ordering(42) 00:11:06.171 fused_ordering(43) 00:11:06.171 fused_ordering(44) 00:11:06.171 fused_ordering(45) 00:11:06.171 fused_ordering(46) 00:11:06.171 fused_ordering(47) 00:11:06.171 fused_ordering(48) 00:11:06.171 fused_ordering(49) 00:11:06.171 fused_ordering(50) 00:11:06.171 fused_ordering(51) 00:11:06.171 fused_ordering(52) 00:11:06.171 fused_ordering(53) 00:11:06.171 fused_ordering(54) 00:11:06.171 fused_ordering(55) 
00:11:06.171 fused_ordering(56) 00:11:06.171 fused_ordering(57) 00:11:06.171 fused_ordering(58) 00:11:06.171 fused_ordering(59) 00:11:06.171 fused_ordering(60) 00:11:06.171 fused_ordering(61) 00:11:06.171 fused_ordering(62) 00:11:06.171 fused_ordering(63) 00:11:06.171 fused_ordering(64) 00:11:06.171 fused_ordering(65) 00:11:06.171 fused_ordering(66) 00:11:06.171 fused_ordering(67) 00:11:06.171 fused_ordering(68) 00:11:06.171 fused_ordering(69) 00:11:06.171 fused_ordering(70) 00:11:06.171 fused_ordering(71) 00:11:06.171 fused_ordering(72) 00:11:06.171 fused_ordering(73) 00:11:06.171 fused_ordering(74) 00:11:06.171 fused_ordering(75) 00:11:06.171 fused_ordering(76) 00:11:06.171 fused_ordering(77) 00:11:06.171 fused_ordering(78) 00:11:06.171 fused_ordering(79) 00:11:06.171 fused_ordering(80) 00:11:06.171 fused_ordering(81) 00:11:06.171 fused_ordering(82) 00:11:06.171 fused_ordering(83) 00:11:06.171 fused_ordering(84) 00:11:06.171 fused_ordering(85) 00:11:06.171 fused_ordering(86) 00:11:06.171 fused_ordering(87) 00:11:06.171 fused_ordering(88) 00:11:06.171 fused_ordering(89) 00:11:06.171 fused_ordering(90) 00:11:06.171 fused_ordering(91) 00:11:06.171 fused_ordering(92) 00:11:06.171 fused_ordering(93) 00:11:06.171 fused_ordering(94) 00:11:06.171 fused_ordering(95) 00:11:06.171 fused_ordering(96) 00:11:06.171 fused_ordering(97) 00:11:06.171 fused_ordering(98) 00:11:06.171 fused_ordering(99) 00:11:06.171 fused_ordering(100) 00:11:06.171 fused_ordering(101) 00:11:06.171 fused_ordering(102) 00:11:06.171 fused_ordering(103) 00:11:06.171 fused_ordering(104) 00:11:06.171 fused_ordering(105) 00:11:06.171 fused_ordering(106) 00:11:06.171 fused_ordering(107) 00:11:06.171 fused_ordering(108) 00:11:06.171 fused_ordering(109) 00:11:06.171 fused_ordering(110) 00:11:06.171 fused_ordering(111) 00:11:06.171 fused_ordering(112) 00:11:06.171 fused_ordering(113) 00:11:06.171 fused_ordering(114) 00:11:06.171 fused_ordering(115) 00:11:06.171 fused_ordering(116) 00:11:06.171 fused_ordering(117) 00:11:06.171 fused_ordering(118) 00:11:06.171 fused_ordering(119) 00:11:06.171 fused_ordering(120) 00:11:06.171 fused_ordering(121) 00:11:06.171 fused_ordering(122) 00:11:06.171 fused_ordering(123) 00:11:06.171 fused_ordering(124) 00:11:06.171 fused_ordering(125) 00:11:06.171 fused_ordering(126) 00:11:06.171 fused_ordering(127) 00:11:06.171 fused_ordering(128) 00:11:06.171 fused_ordering(129) 00:11:06.171 fused_ordering(130) 00:11:06.171 fused_ordering(131) 00:11:06.171 fused_ordering(132) 00:11:06.171 fused_ordering(133) 00:11:06.171 fused_ordering(134) 00:11:06.171 fused_ordering(135) 00:11:06.171 fused_ordering(136) 00:11:06.171 fused_ordering(137) 00:11:06.171 fused_ordering(138) 00:11:06.171 fused_ordering(139) 00:11:06.171 fused_ordering(140) 00:11:06.171 fused_ordering(141) 00:11:06.171 fused_ordering(142) 00:11:06.171 fused_ordering(143) 00:11:06.171 fused_ordering(144) 00:11:06.171 fused_ordering(145) 00:11:06.171 fused_ordering(146) 00:11:06.171 fused_ordering(147) 00:11:06.171 fused_ordering(148) 00:11:06.171 fused_ordering(149) 00:11:06.171 fused_ordering(150) 00:11:06.171 fused_ordering(151) 00:11:06.171 fused_ordering(152) 00:11:06.171 fused_ordering(153) 00:11:06.171 fused_ordering(154) 00:11:06.171 fused_ordering(155) 00:11:06.171 fused_ordering(156) 00:11:06.171 fused_ordering(157) 00:11:06.171 fused_ordering(158) 00:11:06.171 fused_ordering(159) 00:11:06.171 fused_ordering(160) 00:11:06.171 fused_ordering(161) 00:11:06.171 fused_ordering(162) 00:11:06.171 fused_ordering(163) 00:11:06.171 fused_ordering(164) 
00:11:06.171 fused_ordering(165) 00:11:06.171 fused_ordering(166) 00:11:06.171 fused_ordering(167) 00:11:06.171 fused_ordering(168) 00:11:06.171 fused_ordering(169) 00:11:06.171 fused_ordering(170) 00:11:06.171 fused_ordering(171) 00:11:06.171 fused_ordering(172) 00:11:06.171 fused_ordering(173) 00:11:06.171 fused_ordering(174) 00:11:06.171 fused_ordering(175) 00:11:06.171 fused_ordering(176) 00:11:06.171 fused_ordering(177) 00:11:06.171 fused_ordering(178) 00:11:06.171 fused_ordering(179) 00:11:06.171 fused_ordering(180) 00:11:06.171 fused_ordering(181) 00:11:06.171 fused_ordering(182) 00:11:06.171 fused_ordering(183) 00:11:06.171 fused_ordering(184) 00:11:06.171 fused_ordering(185) 00:11:06.171 fused_ordering(186) 00:11:06.171 fused_ordering(187) 00:11:06.171 fused_ordering(188) 00:11:06.171 fused_ordering(189) 00:11:06.171 fused_ordering(190) 00:11:06.171 fused_ordering(191) 00:11:06.171 fused_ordering(192) 00:11:06.171 fused_ordering(193) 00:11:06.171 fused_ordering(194) 00:11:06.171 fused_ordering(195) 00:11:06.171 fused_ordering(196) 00:11:06.171 fused_ordering(197) 00:11:06.171 fused_ordering(198) 00:11:06.171 fused_ordering(199) 00:11:06.171 fused_ordering(200) 00:11:06.171 fused_ordering(201) 00:11:06.171 fused_ordering(202) 00:11:06.171 fused_ordering(203) 00:11:06.171 fused_ordering(204) 00:11:06.171 fused_ordering(205) 00:11:06.433 fused_ordering(206) 00:11:06.433 fused_ordering(207) 00:11:06.433 fused_ordering(208) 00:11:06.433 fused_ordering(209) 00:11:06.433 fused_ordering(210) 00:11:06.433 fused_ordering(211) 00:11:06.433 fused_ordering(212) 00:11:06.433 fused_ordering(213) 00:11:06.433 fused_ordering(214) 00:11:06.433 fused_ordering(215) 00:11:06.433 fused_ordering(216) 00:11:06.433 fused_ordering(217) 00:11:06.433 fused_ordering(218) 00:11:06.433 fused_ordering(219) 00:11:06.433 fused_ordering(220) 00:11:06.433 fused_ordering(221) 00:11:06.433 fused_ordering(222) 00:11:06.433 fused_ordering(223) 00:11:06.433 fused_ordering(224) 00:11:06.433 fused_ordering(225) 00:11:06.434 fused_ordering(226) 00:11:06.434 fused_ordering(227) 00:11:06.434 fused_ordering(228) 00:11:06.434 fused_ordering(229) 00:11:06.434 fused_ordering(230) 00:11:06.434 fused_ordering(231) 00:11:06.434 fused_ordering(232) 00:11:06.434 fused_ordering(233) 00:11:06.434 fused_ordering(234) 00:11:06.434 fused_ordering(235) 00:11:06.434 fused_ordering(236) 00:11:06.434 fused_ordering(237) 00:11:06.434 fused_ordering(238) 00:11:06.434 fused_ordering(239) 00:11:06.434 fused_ordering(240) 00:11:06.434 fused_ordering(241) 00:11:06.434 fused_ordering(242) 00:11:06.434 fused_ordering(243) 00:11:06.434 fused_ordering(244) 00:11:06.434 fused_ordering(245) 00:11:06.434 fused_ordering(246) 00:11:06.434 fused_ordering(247) 00:11:06.434 fused_ordering(248) 00:11:06.434 fused_ordering(249) 00:11:06.434 fused_ordering(250) 00:11:06.434 fused_ordering(251) 00:11:06.434 fused_ordering(252) 00:11:06.434 fused_ordering(253) 00:11:06.434 fused_ordering(254) 00:11:06.434 fused_ordering(255) 00:11:06.434 fused_ordering(256) 00:11:06.434 fused_ordering(257) 00:11:06.434 fused_ordering(258) 00:11:06.434 fused_ordering(259) 00:11:06.434 fused_ordering(260) 00:11:06.434 fused_ordering(261) 00:11:06.434 fused_ordering(262) 00:11:06.434 fused_ordering(263) 00:11:06.434 fused_ordering(264) 00:11:06.434 fused_ordering(265) 00:11:06.434 fused_ordering(266) 00:11:06.434 fused_ordering(267) 00:11:06.434 fused_ordering(268) 00:11:06.434 fused_ordering(269) 00:11:06.434 fused_ordering(270) 00:11:06.434 fused_ordering(271) 00:11:06.434 
fused_ordering(272) 00:11:06.434 fused_ordering(273) 00:11:06.434 fused_ordering(274) 00:11:06.434 fused_ordering(275) 00:11:06.434 fused_ordering(276) 00:11:06.434 fused_ordering(277) 00:11:06.434 fused_ordering(278) 00:11:06.434 fused_ordering(279) 00:11:06.434 fused_ordering(280) 00:11:06.434 fused_ordering(281) 00:11:06.434 fused_ordering(282) 00:11:06.434 fused_ordering(283) 00:11:06.434 fused_ordering(284) 00:11:06.434 fused_ordering(285) 00:11:06.434 fused_ordering(286) 00:11:06.434 fused_ordering(287) 00:11:06.434 fused_ordering(288) 00:11:06.434 fused_ordering(289) 00:11:06.434 fused_ordering(290) 00:11:06.434 fused_ordering(291) 00:11:06.434 fused_ordering(292) 00:11:06.434 fused_ordering(293) 00:11:06.434 fused_ordering(294) 00:11:06.434 fused_ordering(295) 00:11:06.434 fused_ordering(296) 00:11:06.434 fused_ordering(297) 00:11:06.434 fused_ordering(298) 00:11:06.434 fused_ordering(299) 00:11:06.434 fused_ordering(300) 00:11:06.434 fused_ordering(301) 00:11:06.434 fused_ordering(302) 00:11:06.434 fused_ordering(303) 00:11:06.434 fused_ordering(304) 00:11:06.434 fused_ordering(305) 00:11:06.434 fused_ordering(306) 00:11:06.434 fused_ordering(307) 00:11:06.434 fused_ordering(308) 00:11:06.434 fused_ordering(309) 00:11:06.434 fused_ordering(310) 00:11:06.434 fused_ordering(311) 00:11:06.434 fused_ordering(312) 00:11:06.434 fused_ordering(313) 00:11:06.434 fused_ordering(314) 00:11:06.434 fused_ordering(315) 00:11:06.434 fused_ordering(316) 00:11:06.434 fused_ordering(317) 00:11:06.434 fused_ordering(318) 00:11:06.434 fused_ordering(319) 00:11:06.434 fused_ordering(320) 00:11:06.434 fused_ordering(321) 00:11:06.434 fused_ordering(322) 00:11:06.434 fused_ordering(323) 00:11:06.434 fused_ordering(324) 00:11:06.434 fused_ordering(325) 00:11:06.434 fused_ordering(326) 00:11:06.434 fused_ordering(327) 00:11:06.434 fused_ordering(328) 00:11:06.434 fused_ordering(329) 00:11:06.434 fused_ordering(330) 00:11:06.434 fused_ordering(331) 00:11:06.434 fused_ordering(332) 00:11:06.434 fused_ordering(333) 00:11:06.434 fused_ordering(334) 00:11:06.434 fused_ordering(335) 00:11:06.434 fused_ordering(336) 00:11:06.434 fused_ordering(337) 00:11:06.434 fused_ordering(338) 00:11:06.434 fused_ordering(339) 00:11:06.434 fused_ordering(340) 00:11:06.434 fused_ordering(341) 00:11:06.434 fused_ordering(342) 00:11:06.434 fused_ordering(343) 00:11:06.434 fused_ordering(344) 00:11:06.434 fused_ordering(345) 00:11:06.434 fused_ordering(346) 00:11:06.434 fused_ordering(347) 00:11:06.434 fused_ordering(348) 00:11:06.434 fused_ordering(349) 00:11:06.434 fused_ordering(350) 00:11:06.434 fused_ordering(351) 00:11:06.434 fused_ordering(352) 00:11:06.434 fused_ordering(353) 00:11:06.434 fused_ordering(354) 00:11:06.434 fused_ordering(355) 00:11:06.434 fused_ordering(356) 00:11:06.434 fused_ordering(357) 00:11:06.434 fused_ordering(358) 00:11:06.434 fused_ordering(359) 00:11:06.434 fused_ordering(360) 00:11:06.434 fused_ordering(361) 00:11:06.434 fused_ordering(362) 00:11:06.434 fused_ordering(363) 00:11:06.434 fused_ordering(364) 00:11:06.434 fused_ordering(365) 00:11:06.434 fused_ordering(366) 00:11:06.434 fused_ordering(367) 00:11:06.434 fused_ordering(368) 00:11:06.434 fused_ordering(369) 00:11:06.434 fused_ordering(370) 00:11:06.434 fused_ordering(371) 00:11:06.434 fused_ordering(372) 00:11:06.434 fused_ordering(373) 00:11:06.434 fused_ordering(374) 00:11:06.434 fused_ordering(375) 00:11:06.434 fused_ordering(376) 00:11:06.434 fused_ordering(377) 00:11:06.434 fused_ordering(378) 00:11:06.434 fused_ordering(379) 
00:11:06.434 fused_ordering(380) 00:11:06.434 fused_ordering(381) 00:11:06.434 fused_ordering(382) 00:11:06.434 fused_ordering(383) 00:11:06.434 fused_ordering(384) 00:11:06.434 fused_ordering(385) 00:11:06.434 fused_ordering(386) 00:11:06.434 fused_ordering(387) 00:11:06.434 fused_ordering(388) 00:11:06.434 fused_ordering(389) 00:11:06.434 fused_ordering(390) 00:11:06.434 fused_ordering(391) 00:11:06.434 fused_ordering(392) 00:11:06.434 fused_ordering(393) 00:11:06.434 fused_ordering(394) 00:11:06.434 fused_ordering(395) 00:11:06.434 fused_ordering(396) 00:11:06.434 fused_ordering(397) 00:11:06.434 fused_ordering(398) 00:11:06.434 fused_ordering(399) 00:11:06.434 fused_ordering(400) 00:11:06.434 fused_ordering(401) 00:11:06.434 fused_ordering(402) 00:11:06.434 fused_ordering(403) 00:11:06.434 fused_ordering(404) 00:11:06.434 fused_ordering(405) 00:11:06.434 fused_ordering(406) 00:11:06.434 fused_ordering(407) 00:11:06.434 fused_ordering(408) 00:11:06.434 fused_ordering(409) 00:11:06.434 fused_ordering(410) 00:11:07.007 fused_ordering(411) 00:11:07.007 fused_ordering(412) 00:11:07.007 fused_ordering(413) 00:11:07.007 fused_ordering(414) 00:11:07.007 fused_ordering(415) 00:11:07.007 fused_ordering(416) 00:11:07.007 fused_ordering(417) 00:11:07.007 fused_ordering(418) 00:11:07.007 fused_ordering(419) 00:11:07.007 fused_ordering(420) 00:11:07.007 fused_ordering(421) 00:11:07.007 fused_ordering(422) 00:11:07.007 fused_ordering(423) 00:11:07.007 fused_ordering(424) 00:11:07.007 fused_ordering(425) 00:11:07.007 fused_ordering(426) 00:11:07.007 fused_ordering(427) 00:11:07.007 fused_ordering(428) 00:11:07.007 fused_ordering(429) 00:11:07.007 fused_ordering(430) 00:11:07.007 fused_ordering(431) 00:11:07.007 fused_ordering(432) 00:11:07.007 fused_ordering(433) 00:11:07.007 fused_ordering(434) 00:11:07.007 fused_ordering(435) 00:11:07.007 fused_ordering(436) 00:11:07.007 fused_ordering(437) 00:11:07.007 fused_ordering(438) 00:11:07.007 fused_ordering(439) 00:11:07.007 fused_ordering(440) 00:11:07.007 fused_ordering(441) 00:11:07.007 fused_ordering(442) 00:11:07.007 fused_ordering(443) 00:11:07.007 fused_ordering(444) 00:11:07.007 fused_ordering(445) 00:11:07.007 fused_ordering(446) 00:11:07.007 fused_ordering(447) 00:11:07.007 fused_ordering(448) 00:11:07.007 fused_ordering(449) 00:11:07.007 fused_ordering(450) 00:11:07.007 fused_ordering(451) 00:11:07.007 fused_ordering(452) 00:11:07.007 fused_ordering(453) 00:11:07.007 fused_ordering(454) 00:11:07.007 fused_ordering(455) 00:11:07.007 fused_ordering(456) 00:11:07.007 fused_ordering(457) 00:11:07.007 fused_ordering(458) 00:11:07.007 fused_ordering(459) 00:11:07.007 fused_ordering(460) 00:11:07.007 fused_ordering(461) 00:11:07.007 fused_ordering(462) 00:11:07.007 fused_ordering(463) 00:11:07.007 fused_ordering(464) 00:11:07.007 fused_ordering(465) 00:11:07.007 fused_ordering(466) 00:11:07.007 fused_ordering(467) 00:11:07.007 fused_ordering(468) 00:11:07.007 fused_ordering(469) 00:11:07.007 fused_ordering(470) 00:11:07.007 fused_ordering(471) 00:11:07.007 fused_ordering(472) 00:11:07.007 fused_ordering(473) 00:11:07.007 fused_ordering(474) 00:11:07.007 fused_ordering(475) 00:11:07.007 fused_ordering(476) 00:11:07.007 fused_ordering(477) 00:11:07.007 fused_ordering(478) 00:11:07.007 fused_ordering(479) 00:11:07.007 fused_ordering(480) 00:11:07.007 fused_ordering(481) 00:11:07.007 fused_ordering(482) 00:11:07.007 fused_ordering(483) 00:11:07.007 fused_ordering(484) 00:11:07.007 fused_ordering(485) 00:11:07.007 fused_ordering(486) 00:11:07.007 
fused_ordering(487) 00:11:07.007 fused_ordering(488) 00:11:07.007 fused_ordering(489) 00:11:07.007 fused_ordering(490) 00:11:07.007 fused_ordering(491) 00:11:07.007 fused_ordering(492) 00:11:07.007 fused_ordering(493) 00:11:07.007 fused_ordering(494) 00:11:07.007 fused_ordering(495) 00:11:07.007 fused_ordering(496) 00:11:07.007 fused_ordering(497) 00:11:07.007 fused_ordering(498) 00:11:07.007 fused_ordering(499) 00:11:07.007 fused_ordering(500) 00:11:07.007 fused_ordering(501) 00:11:07.007 fused_ordering(502) 00:11:07.007 fused_ordering(503) 00:11:07.007 fused_ordering(504) 00:11:07.007 fused_ordering(505) 00:11:07.007 fused_ordering(506) 00:11:07.007 fused_ordering(507) 00:11:07.007 fused_ordering(508) 00:11:07.007 fused_ordering(509) 00:11:07.007 fused_ordering(510) 00:11:07.007 fused_ordering(511) 00:11:07.007 fused_ordering(512) 00:11:07.007 fused_ordering(513) 00:11:07.007 fused_ordering(514) 00:11:07.007 fused_ordering(515) 00:11:07.007 fused_ordering(516) 00:11:07.007 fused_ordering(517) 00:11:07.007 fused_ordering(518) 00:11:07.007 fused_ordering(519) 00:11:07.007 fused_ordering(520) 00:11:07.007 fused_ordering(521) 00:11:07.007 fused_ordering(522) 00:11:07.007 fused_ordering(523) 00:11:07.007 fused_ordering(524) 00:11:07.007 fused_ordering(525) 00:11:07.007 fused_ordering(526) 00:11:07.007 fused_ordering(527) 00:11:07.007 fused_ordering(528) 00:11:07.007 fused_ordering(529) 00:11:07.007 fused_ordering(530) 00:11:07.007 fused_ordering(531) 00:11:07.007 fused_ordering(532) 00:11:07.007 fused_ordering(533) 00:11:07.007 fused_ordering(534) 00:11:07.007 fused_ordering(535) 00:11:07.007 fused_ordering(536) 00:11:07.007 fused_ordering(537) 00:11:07.007 fused_ordering(538) 00:11:07.007 fused_ordering(539) 00:11:07.007 fused_ordering(540) 00:11:07.007 fused_ordering(541) 00:11:07.007 fused_ordering(542) 00:11:07.007 fused_ordering(543) 00:11:07.007 fused_ordering(544) 00:11:07.007 fused_ordering(545) 00:11:07.007 fused_ordering(546) 00:11:07.007 fused_ordering(547) 00:11:07.007 fused_ordering(548) 00:11:07.007 fused_ordering(549) 00:11:07.007 fused_ordering(550) 00:11:07.007 fused_ordering(551) 00:11:07.007 fused_ordering(552) 00:11:07.007 fused_ordering(553) 00:11:07.007 fused_ordering(554) 00:11:07.007 fused_ordering(555) 00:11:07.007 fused_ordering(556) 00:11:07.007 fused_ordering(557) 00:11:07.007 fused_ordering(558) 00:11:07.007 fused_ordering(559) 00:11:07.007 fused_ordering(560) 00:11:07.007 fused_ordering(561) 00:11:07.007 fused_ordering(562) 00:11:07.007 fused_ordering(563) 00:11:07.007 fused_ordering(564) 00:11:07.007 fused_ordering(565) 00:11:07.007 fused_ordering(566) 00:11:07.007 fused_ordering(567) 00:11:07.007 fused_ordering(568) 00:11:07.007 fused_ordering(569) 00:11:07.007 fused_ordering(570) 00:11:07.007 fused_ordering(571) 00:11:07.007 fused_ordering(572) 00:11:07.007 fused_ordering(573) 00:11:07.007 fused_ordering(574) 00:11:07.007 fused_ordering(575) 00:11:07.007 fused_ordering(576) 00:11:07.007 fused_ordering(577) 00:11:07.007 fused_ordering(578) 00:11:07.007 fused_ordering(579) 00:11:07.007 fused_ordering(580) 00:11:07.007 fused_ordering(581) 00:11:07.007 fused_ordering(582) 00:11:07.007 fused_ordering(583) 00:11:07.007 fused_ordering(584) 00:11:07.007 fused_ordering(585) 00:11:07.007 fused_ordering(586) 00:11:07.007 fused_ordering(587) 00:11:07.007 fused_ordering(588) 00:11:07.007 fused_ordering(589) 00:11:07.007 fused_ordering(590) 00:11:07.007 fused_ordering(591) 00:11:07.007 fused_ordering(592) 00:11:07.007 fused_ordering(593) 00:11:07.007 fused_ordering(594) 
00:11:07.007 fused_ordering(595) 00:11:07.007 fused_ordering(596) 00:11:07.007 fused_ordering(597) 00:11:07.007 fused_ordering(598) 00:11:07.007 fused_ordering(599) 00:11:07.007 fused_ordering(600) 00:11:07.007 fused_ordering(601) 00:11:07.007 fused_ordering(602) 00:11:07.007 fused_ordering(603) 00:11:07.007 fused_ordering(604) 00:11:07.007 fused_ordering(605) 00:11:07.007 fused_ordering(606) 00:11:07.007 fused_ordering(607) 00:11:07.007 fused_ordering(608) 00:11:07.007 fused_ordering(609) 00:11:07.007 fused_ordering(610) 00:11:07.007 fused_ordering(611) 00:11:07.007 fused_ordering(612) 00:11:07.007 fused_ordering(613) 00:11:07.007 fused_ordering(614) 00:11:07.007 fused_ordering(615) 00:11:07.579 fused_ordering(616) 00:11:07.579 fused_ordering(617) 00:11:07.579 fused_ordering(618) 00:11:07.579 fused_ordering(619) 00:11:07.579 fused_ordering(620) 00:11:07.579 fused_ordering(621) 00:11:07.579 fused_ordering(622) 00:11:07.579 fused_ordering(623) 00:11:07.579 fused_ordering(624) 00:11:07.579 fused_ordering(625) 00:11:07.579 fused_ordering(626) 00:11:07.579 fused_ordering(627) 00:11:07.579 fused_ordering(628) 00:11:07.579 fused_ordering(629) 00:11:07.579 fused_ordering(630) 00:11:07.579 fused_ordering(631) 00:11:07.579 fused_ordering(632) 00:11:07.579 fused_ordering(633) 00:11:07.579 fused_ordering(634) 00:11:07.579 fused_ordering(635) 00:11:07.579 fused_ordering(636) 00:11:07.579 fused_ordering(637) 00:11:07.579 fused_ordering(638) 00:11:07.579 fused_ordering(639) 00:11:07.579 fused_ordering(640) 00:11:07.579 fused_ordering(641) 00:11:07.579 fused_ordering(642) 00:11:07.579 fused_ordering(643) 00:11:07.579 fused_ordering(644) 00:11:07.579 fused_ordering(645) 00:11:07.579 fused_ordering(646) 00:11:07.579 fused_ordering(647) 00:11:07.579 fused_ordering(648) 00:11:07.579 fused_ordering(649) 00:11:07.579 fused_ordering(650) 00:11:07.579 fused_ordering(651) 00:11:07.579 fused_ordering(652) 00:11:07.579 fused_ordering(653) 00:11:07.579 fused_ordering(654) 00:11:07.579 fused_ordering(655) 00:11:07.579 fused_ordering(656) 00:11:07.579 fused_ordering(657) 00:11:07.579 fused_ordering(658) 00:11:07.579 fused_ordering(659) 00:11:07.579 fused_ordering(660) 00:11:07.579 fused_ordering(661) 00:11:07.579 fused_ordering(662) 00:11:07.579 fused_ordering(663) 00:11:07.579 fused_ordering(664) 00:11:07.579 fused_ordering(665) 00:11:07.579 fused_ordering(666) 00:11:07.579 fused_ordering(667) 00:11:07.579 fused_ordering(668) 00:11:07.579 fused_ordering(669) 00:11:07.579 fused_ordering(670) 00:11:07.579 fused_ordering(671) 00:11:07.579 fused_ordering(672) 00:11:07.579 fused_ordering(673) 00:11:07.579 fused_ordering(674) 00:11:07.579 fused_ordering(675) 00:11:07.579 fused_ordering(676) 00:11:07.579 fused_ordering(677) 00:11:07.579 fused_ordering(678) 00:11:07.579 fused_ordering(679) 00:11:07.579 fused_ordering(680) 00:11:07.579 fused_ordering(681) 00:11:07.579 fused_ordering(682) 00:11:07.579 fused_ordering(683) 00:11:07.579 fused_ordering(684) 00:11:07.579 fused_ordering(685) 00:11:07.579 fused_ordering(686) 00:11:07.579 fused_ordering(687) 00:11:07.579 fused_ordering(688) 00:11:07.579 fused_ordering(689) 00:11:07.579 fused_ordering(690) 00:11:07.579 fused_ordering(691) 00:11:07.579 fused_ordering(692) 00:11:07.579 fused_ordering(693) 00:11:07.579 fused_ordering(694) 00:11:07.579 fused_ordering(695) 00:11:07.579 fused_ordering(696) 00:11:07.579 fused_ordering(697) 00:11:07.579 fused_ordering(698) 00:11:07.579 fused_ordering(699) 00:11:07.579 fused_ordering(700) 00:11:07.579 fused_ordering(701) 00:11:07.579 
fused_ordering(702) 00:11:07.579 fused_ordering(703) 00:11:07.579 fused_ordering(704) 00:11:07.579 fused_ordering(705) 00:11:07.579 fused_ordering(706) 00:11:07.579 fused_ordering(707) 00:11:07.579 fused_ordering(708) 00:11:07.579 fused_ordering(709) 00:11:07.579 fused_ordering(710) 00:11:07.579 fused_ordering(711) 00:11:07.579 fused_ordering(712) 00:11:07.579 fused_ordering(713) 00:11:07.579 fused_ordering(714) 00:11:07.579 fused_ordering(715) 00:11:07.579 fused_ordering(716) 00:11:07.579 fused_ordering(717) 00:11:07.579 fused_ordering(718) 00:11:07.579 fused_ordering(719) 00:11:07.579 fused_ordering(720) 00:11:07.579 fused_ordering(721) 00:11:07.579 fused_ordering(722) 00:11:07.579 fused_ordering(723) 00:11:07.579 fused_ordering(724) 00:11:07.579 fused_ordering(725) 00:11:07.579 fused_ordering(726) 00:11:07.579 fused_ordering(727) 00:11:07.579 fused_ordering(728) 00:11:07.579 fused_ordering(729) 00:11:07.579 fused_ordering(730) 00:11:07.579 fused_ordering(731) 00:11:07.579 fused_ordering(732) 00:11:07.579 fused_ordering(733) 00:11:07.579 fused_ordering(734) 00:11:07.579 fused_ordering(735) 00:11:07.579 fused_ordering(736) 00:11:07.579 fused_ordering(737) 00:11:07.579 fused_ordering(738) 00:11:07.579 fused_ordering(739) 00:11:07.579 fused_ordering(740) 00:11:07.579 fused_ordering(741) 00:11:07.579 fused_ordering(742) 00:11:07.579 fused_ordering(743) 00:11:07.579 fused_ordering(744) 00:11:07.579 fused_ordering(745) 00:11:07.579 fused_ordering(746) 00:11:07.579 fused_ordering(747) 00:11:07.579 fused_ordering(748) 00:11:07.579 fused_ordering(749) 00:11:07.579 fused_ordering(750) 00:11:07.579 fused_ordering(751) 00:11:07.579 fused_ordering(752) 00:11:07.579 fused_ordering(753) 00:11:07.579 fused_ordering(754) 00:11:07.579 fused_ordering(755) 00:11:07.579 fused_ordering(756) 00:11:07.579 fused_ordering(757) 00:11:07.579 fused_ordering(758) 00:11:07.579 fused_ordering(759) 00:11:07.579 fused_ordering(760) 00:11:07.579 fused_ordering(761) 00:11:07.579 fused_ordering(762) 00:11:07.579 fused_ordering(763) 00:11:07.579 fused_ordering(764) 00:11:07.579 fused_ordering(765) 00:11:07.579 fused_ordering(766) 00:11:07.579 fused_ordering(767) 00:11:07.579 fused_ordering(768) 00:11:07.580 fused_ordering(769) 00:11:07.580 fused_ordering(770) 00:11:07.580 fused_ordering(771) 00:11:07.580 fused_ordering(772) 00:11:07.580 fused_ordering(773) 00:11:07.580 fused_ordering(774) 00:11:07.580 fused_ordering(775) 00:11:07.580 fused_ordering(776) 00:11:07.580 fused_ordering(777) 00:11:07.580 fused_ordering(778) 00:11:07.580 fused_ordering(779) 00:11:07.580 fused_ordering(780) 00:11:07.580 fused_ordering(781) 00:11:07.580 fused_ordering(782) 00:11:07.580 fused_ordering(783) 00:11:07.580 fused_ordering(784) 00:11:07.580 fused_ordering(785) 00:11:07.580 fused_ordering(786) 00:11:07.580 fused_ordering(787) 00:11:07.580 fused_ordering(788) 00:11:07.580 fused_ordering(789) 00:11:07.580 fused_ordering(790) 00:11:07.580 fused_ordering(791) 00:11:07.580 fused_ordering(792) 00:11:07.580 fused_ordering(793) 00:11:07.580 fused_ordering(794) 00:11:07.580 fused_ordering(795) 00:11:07.580 fused_ordering(796) 00:11:07.580 fused_ordering(797) 00:11:07.580 fused_ordering(798) 00:11:07.580 fused_ordering(799) 00:11:07.580 fused_ordering(800) 00:11:07.580 fused_ordering(801) 00:11:07.580 fused_ordering(802) 00:11:07.580 fused_ordering(803) 00:11:07.580 fused_ordering(804) 00:11:07.580 fused_ordering(805) 00:11:07.580 fused_ordering(806) 00:11:07.580 fused_ordering(807) 00:11:07.580 fused_ordering(808) 00:11:07.580 fused_ordering(809) 
00:11:07.580 fused_ordering(810) 00:11:07.580 fused_ordering(811) 00:11:07.580 fused_ordering(812) 00:11:07.580 fused_ordering(813) 00:11:07.580 fused_ordering(814) 00:11:07.580 fused_ordering(815) 00:11:07.580 fused_ordering(816) 00:11:07.580 fused_ordering(817) 00:11:07.580 fused_ordering(818) 00:11:07.580 fused_ordering(819) 00:11:07.580 fused_ordering(820) 00:11:08.152 fused_ordering(821) 00:11:08.152 fused_ordering(822) 00:11:08.152 fused_ordering(823) 00:11:08.152 fused_ordering(824) 00:11:08.152 fused_ordering(825) 00:11:08.152 fused_ordering(826) 00:11:08.152 fused_ordering(827) 00:11:08.152 fused_ordering(828) 00:11:08.152 fused_ordering(829) 00:11:08.152 fused_ordering(830) 00:11:08.152 fused_ordering(831) 00:11:08.152 fused_ordering(832) 00:11:08.152 fused_ordering(833) 00:11:08.152 fused_ordering(834) 00:11:08.152 fused_ordering(835) 00:11:08.152 fused_ordering(836) 00:11:08.152 fused_ordering(837) 00:11:08.152 fused_ordering(838) 00:11:08.152 fused_ordering(839) 00:11:08.152 fused_ordering(840) 00:11:08.152 fused_ordering(841) 00:11:08.152 fused_ordering(842) 00:11:08.152 fused_ordering(843) 00:11:08.152 fused_ordering(844) 00:11:08.152 fused_ordering(845) 00:11:08.152 fused_ordering(846) 00:11:08.152 fused_ordering(847) 00:11:08.152 fused_ordering(848) 00:11:08.152 fused_ordering(849) 00:11:08.152 fused_ordering(850) 00:11:08.152 fused_ordering(851) 00:11:08.152 fused_ordering(852) 00:11:08.152 fused_ordering(853) 00:11:08.152 fused_ordering(854) 00:11:08.152 fused_ordering(855) 00:11:08.152 fused_ordering(856) 00:11:08.152 fused_ordering(857) 00:11:08.152 fused_ordering(858) 00:11:08.152 fused_ordering(859) 00:11:08.152 fused_ordering(860) 00:11:08.152 fused_ordering(861) 00:11:08.152 fused_ordering(862) 00:11:08.152 fused_ordering(863) 00:11:08.152 fused_ordering(864) 00:11:08.152 fused_ordering(865) 00:11:08.152 fused_ordering(866) 00:11:08.152 fused_ordering(867) 00:11:08.152 fused_ordering(868) 00:11:08.152 fused_ordering(869) 00:11:08.152 fused_ordering(870) 00:11:08.152 fused_ordering(871) 00:11:08.152 fused_ordering(872) 00:11:08.152 fused_ordering(873) 00:11:08.152 fused_ordering(874) 00:11:08.152 fused_ordering(875) 00:11:08.152 fused_ordering(876) 00:11:08.152 fused_ordering(877) 00:11:08.152 fused_ordering(878) 00:11:08.152 fused_ordering(879) 00:11:08.152 fused_ordering(880) 00:11:08.152 fused_ordering(881) 00:11:08.152 fused_ordering(882) 00:11:08.152 fused_ordering(883) 00:11:08.152 fused_ordering(884) 00:11:08.152 fused_ordering(885) 00:11:08.152 fused_ordering(886) 00:11:08.152 fused_ordering(887) 00:11:08.152 fused_ordering(888) 00:11:08.152 fused_ordering(889) 00:11:08.152 fused_ordering(890) 00:11:08.152 fused_ordering(891) 00:11:08.152 fused_ordering(892) 00:11:08.152 fused_ordering(893) 00:11:08.152 fused_ordering(894) 00:11:08.152 fused_ordering(895) 00:11:08.152 fused_ordering(896) 00:11:08.152 fused_ordering(897) 00:11:08.152 fused_ordering(898) 00:11:08.152 fused_ordering(899) 00:11:08.152 fused_ordering(900) 00:11:08.152 fused_ordering(901) 00:11:08.152 fused_ordering(902) 00:11:08.152 fused_ordering(903) 00:11:08.152 fused_ordering(904) 00:11:08.152 fused_ordering(905) 00:11:08.152 fused_ordering(906) 00:11:08.152 fused_ordering(907) 00:11:08.152 fused_ordering(908) 00:11:08.152 fused_ordering(909) 00:11:08.152 fused_ordering(910) 00:11:08.152 fused_ordering(911) 00:11:08.152 fused_ordering(912) 00:11:08.152 fused_ordering(913) 00:11:08.152 fused_ordering(914) 00:11:08.152 fused_ordering(915) 00:11:08.152 fused_ordering(916) 00:11:08.152 
fused_ordering(917) 00:11:08.152 fused_ordering(918) 00:11:08.152 fused_ordering(919) 00:11:08.152 fused_ordering(920) 00:11:08.152 fused_ordering(921) 00:11:08.152 fused_ordering(922) 00:11:08.152 fused_ordering(923) 00:11:08.152 fused_ordering(924) 00:11:08.152 fused_ordering(925) 00:11:08.152 fused_ordering(926) 00:11:08.152 fused_ordering(927) 00:11:08.152 fused_ordering(928) 00:11:08.152 fused_ordering(929) 00:11:08.152 fused_ordering(930) 00:11:08.152 fused_ordering(931) 00:11:08.152 fused_ordering(932) 00:11:08.152 fused_ordering(933) 00:11:08.152 fused_ordering(934) 00:11:08.152 fused_ordering(935) 00:11:08.152 fused_ordering(936) 00:11:08.152 fused_ordering(937) 00:11:08.152 fused_ordering(938) 00:11:08.152 fused_ordering(939) 00:11:08.152 fused_ordering(940) 00:11:08.152 fused_ordering(941) 00:11:08.152 fused_ordering(942) 00:11:08.152 fused_ordering(943) 00:11:08.152 fused_ordering(944) 00:11:08.152 fused_ordering(945) 00:11:08.152 fused_ordering(946) 00:11:08.152 fused_ordering(947) 00:11:08.152 fused_ordering(948) 00:11:08.152 fused_ordering(949) 00:11:08.152 fused_ordering(950) 00:11:08.152 fused_ordering(951) 00:11:08.153 fused_ordering(952) 00:11:08.153 fused_ordering(953) 00:11:08.153 fused_ordering(954) 00:11:08.153 fused_ordering(955) 00:11:08.153 fused_ordering(956) 00:11:08.153 fused_ordering(957) 00:11:08.153 fused_ordering(958) 00:11:08.153 fused_ordering(959) 00:11:08.153 fused_ordering(960) 00:11:08.153 fused_ordering(961) 00:11:08.153 fused_ordering(962) 00:11:08.153 fused_ordering(963) 00:11:08.153 fused_ordering(964) 00:11:08.153 fused_ordering(965) 00:11:08.153 fused_ordering(966) 00:11:08.153 fused_ordering(967) 00:11:08.153 fused_ordering(968) 00:11:08.153 fused_ordering(969) 00:11:08.153 fused_ordering(970) 00:11:08.153 fused_ordering(971) 00:11:08.153 fused_ordering(972) 00:11:08.153 fused_ordering(973) 00:11:08.153 fused_ordering(974) 00:11:08.153 fused_ordering(975) 00:11:08.153 fused_ordering(976) 00:11:08.153 fused_ordering(977) 00:11:08.153 fused_ordering(978) 00:11:08.153 fused_ordering(979) 00:11:08.153 fused_ordering(980) 00:11:08.153 fused_ordering(981) 00:11:08.153 fused_ordering(982) 00:11:08.153 fused_ordering(983) 00:11:08.153 fused_ordering(984) 00:11:08.153 fused_ordering(985) 00:11:08.153 fused_ordering(986) 00:11:08.153 fused_ordering(987) 00:11:08.153 fused_ordering(988) 00:11:08.153 fused_ordering(989) 00:11:08.153 fused_ordering(990) 00:11:08.153 fused_ordering(991) 00:11:08.153 fused_ordering(992) 00:11:08.153 fused_ordering(993) 00:11:08.153 fused_ordering(994) 00:11:08.153 fused_ordering(995) 00:11:08.153 fused_ordering(996) 00:11:08.153 fused_ordering(997) 00:11:08.153 fused_ordering(998) 00:11:08.153 fused_ordering(999) 00:11:08.153 fused_ordering(1000) 00:11:08.153 fused_ordering(1001) 00:11:08.153 fused_ordering(1002) 00:11:08.153 fused_ordering(1003) 00:11:08.153 fused_ordering(1004) 00:11:08.153 fused_ordering(1005) 00:11:08.153 fused_ordering(1006) 00:11:08.153 fused_ordering(1007) 00:11:08.153 fused_ordering(1008) 00:11:08.153 fused_ordering(1009) 00:11:08.153 fused_ordering(1010) 00:11:08.153 fused_ordering(1011) 00:11:08.153 fused_ordering(1012) 00:11:08.153 fused_ordering(1013) 00:11:08.153 fused_ordering(1014) 00:11:08.153 fused_ordering(1015) 00:11:08.153 fused_ordering(1016) 00:11:08.153 fused_ordering(1017) 00:11:08.153 fused_ordering(1018) 00:11:08.153 fused_ordering(1019) 00:11:08.153 fused_ordering(1020) 00:11:08.153 fused_ordering(1021) 00:11:08.153 fused_ordering(1022) 00:11:08.153 fused_ordering(1023) 00:11:08.153 
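The fused_ordering(0) through fused_ordering(1023) lines above are numbered progress markers printed by the test tool; 1024 of them appear in this run. For reference, the target configuration performed through rpc_cmd in the trace, plus the initiator-side run, can be reproduced by hand roughly as follows. This is a sketch only: it assumes the SPDK repository root as the working directory and rpc.py talking to the target's default /var/tmp/spdk.sock socket; every parameter value is copied from the trace.

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512        # the namespace reported above as "size: 1GB", 512-byte blocks
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  # initiator side: exercise fused command ordering against the new listener
  ./test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'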
10:36:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:08.153 10:36:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:08.153 10:36:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:08.153 10:36:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:11:08.153 10:36:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:08.153 10:36:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:11:08.153 10:36:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:08.153 10:36:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:08.153 rmmod nvme_tcp 00:11:08.153 rmmod nvme_fabrics 00:11:08.153 rmmod nvme_keyring 00:11:08.153 10:36:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:08.153 10:36:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:11:08.153 10:36:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:11:08.153 10:36:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 718537 ']' 00:11:08.153 10:36:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 718537 00:11:08.153 10:36:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@949 -- # '[' -z 718537 ']' 00:11:08.153 10:36:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # kill -0 718537 00:11:08.153 10:36:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # uname 00:11:08.153 10:36:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:08.153 10:36:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 718537 00:11:08.153 10:36:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:11:08.153 10:36:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:11:08.153 10:36:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # echo 'killing process with pid 718537' 00:11:08.153 killing process with pid 718537 00:11:08.153 10:36:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # kill 718537 00:11:08.153 [2024-06-10 10:36:32.410600] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:08.153 10:36:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # wait 718537 00:11:08.413 10:36:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:08.414 10:36:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:08.414 10:36:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:08.414 10:36:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:08.414 10:36:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:08.414 10:36:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.414 10:36:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:08.414 10:36:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.326 
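Teardown is the reverse of the setup: nvmfcleanup unloads the NVMe modules, killprocess stops the target, and nvmf_tcp_fini (continued just below) flushes the initiator address and removes the namespace. A rough sketch of the same steps follows; the last line is an assumption about what _remove_spdk_ns amounts to on this host, not its actual code.

  sync
  modprobe -v -r nvme-tcp              # nvmfcleanup retries this in a loop of up to 20 attempts until it unloads
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"   # 718537 in this run
  ip -4 addr flush cvl_0_1             # traced at the start of the next chunk
  ip netns delete cvl_0_0_ns_spdk      # assumption: _remove_spdk_ns drops the namespace created by nvmftestinit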
10:36:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:10.326 00:11:10.326 real 0m13.224s 00:11:10.326 user 0m7.027s 00:11:10.326 sys 0m7.026s 00:11:10.326 10:36:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:10.326 10:36:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:10.326 ************************************ 00:11:10.326 END TEST nvmf_fused_ordering 00:11:10.326 ************************************ 00:11:10.587 10:36:34 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:10.587 10:36:34 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:11:10.587 10:36:34 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:10.587 10:36:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:10.587 ************************************ 00:11:10.587 START TEST nvmf_delete_subsystem 00:11:10.587 ************************************ 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:10.587 * Looking for test storage... 00:11:10.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.587 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:10.588 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.588 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:10.588 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:10.588 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:10.588 10:36:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:18.747 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:18.747 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:18.747 10:36:41 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:18.747 Found net devices under 0000:31:00.0: cvl_0_0 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:18.747 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:18.748 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.748 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:18.748 Found net devices under 0000:31:00.1: cvl_0_1 00:11:18.748 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:18.748 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:18.748 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:18.748 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:18.748 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:18.748 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:18.748 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:18.748 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:18.748 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:18.748 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:18.748 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:18.748 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:18.748 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:18.748 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:18.748 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:18.748 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 
-- # ip -4 addr flush cvl_0_0 00:11:18.748 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:18.748 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:18.748 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:18.748 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:18.748 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:18.748 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:18.748 10:36:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:18.748 10:36:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:18.748 10:36:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:18.748 10:36:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:18.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:18.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:11:18.748 00:11:18.748 --- 10.0.0.2 ping statistics --- 00:11:18.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.748 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:11:18.748 10:36:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:18.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:18.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.353 ms 00:11:18.748 00:11:18.748 --- 10.0.0.1 ping statistics --- 00:11:18.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.748 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:11:18.748 10:36:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:18.748 10:36:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:11:18.748 10:36:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:18.748 10:36:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:18.748 10:36:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:18.748 10:36:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:18.748 10:36:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:18.748 10:36:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:18.748 10:36:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:18.748 10:36:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:18.748 10:36:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:18.748 10:36:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:18.748 10:36:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:18.748 10:36:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=723357 00:11:18.748 10:36:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # 
waitforlisten 723357 00:11:18.748 10:36:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:18.748 10:36:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@830 -- # '[' -z 723357 ']' 00:11:18.748 10:36:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.748 10:36:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:18.748 10:36:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.748 10:36:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:18.748 10:36:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:18.748 [2024-06-10 10:36:42.148047] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:11:18.748 [2024-06-10 10:36:42.148095] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:18.748 EAL: No free 2048 kB hugepages reported on node 1 00:11:18.748 [2024-06-10 10:36:42.209159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:18.748 [2024-06-10 10:36:42.275763] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:18.748 [2024-06-10 10:36:42.275798] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:18.748 [2024-06-10 10:36:42.275806] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:18.748 [2024-06-10 10:36:42.275812] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:18.748 [2024-06-10 10:36:42.275818] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
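The nvmf_tcp_init trace above amounts to the following namespace split; a minimal sketch, assuming the two e810 ports come up as cvl_0_0 (target side) and cvl_0_1 (initiator side) as they do in this run. The authoritative logic lives in test/nvmf/common.sh.

    # Flush both ports, move the target port into its own network namespace,
    # and address the two sides so 10.0.0.2 (target) is reachable only through
    # the namespace while 10.0.0.1 (initiator) stays in the root namespace.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

With that in place, nvmfappstart launches the target inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3) and waitforlisten blocks until the RPC socket /var/tmp/spdk.sock accepts connections, which is the point the trace has reached here.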
00:11:18.748 [2024-06-10 10:36:42.275954] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:11:18.748 [2024-06-10 10:36:42.275955] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.748 10:36:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:18.748 10:36:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@863 -- # return 0 00:11:18.749 10:36:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:18.749 10:36:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@729 -- # xtrace_disable 00:11:18.749 10:36:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:18.749 10:36:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:18.749 10:36:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:18.749 10:36:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:18.749 10:36:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:18.749 [2024-06-10 10:36:42.987383] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:18.749 10:36:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:18.749 10:36:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:18.749 10:36:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:18.749 10:36:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:18.749 10:36:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:18.749 10:36:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:18.749 10:36:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:18.749 10:36:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:18.749 [2024-06-10 10:36:43.011383] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:18.749 [2024-06-10 10:36:43.011557] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:18.749 10:36:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:18.749 10:36:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:18.749 10:36:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:18.749 10:36:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:18.749 NULL1 00:11:18.749 10:36:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:18.749 10:36:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:18.749 10:36:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:18.749 10:36:43 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:19.011 Delay0 00:11:19.011 10:36:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:19.011 10:36:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:19.011 10:36:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:19.011 10:36:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:19.011 10:36:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:19.011 10:36:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=723644 00:11:19.011 10:36:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:19.011 10:36:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:19.011 EAL: No free 2048 kB hugepages reported on node 1 00:11:19.011 [2024-06-10 10:36:43.108277] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:20.926 10:36:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:20.926 10:36:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:20.926 10:36:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:20.926 Read completed with error (sct=0, sc=8) 00:11:20.926 starting I/O failed: -6 00:11:20.926 Write completed with error (sct=0, sc=8) 00:11:20.926 Read completed with error (sct=0, sc=8) 00:11:20.926 Write completed with error (sct=0, sc=8) 00:11:20.926 Read completed with error (sct=0, sc=8) 00:11:20.926 starting I/O failed: -6 00:11:20.926 Read completed with error (sct=0, sc=8) 00:11:20.926 Read completed with error (sct=0, sc=8) 00:11:20.926 Write completed with error (sct=0, sc=8) 00:11:20.926 Read completed with error (sct=0, sc=8) 00:11:20.926 starting I/O failed: -6 00:11:20.926 Read completed with error (sct=0, sc=8) 00:11:20.926 Read completed with error (sct=0, sc=8) 00:11:20.926 Read completed with error (sct=0, sc=8) 00:11:20.926 Read completed with error (sct=0, sc=8) 00:11:20.926 starting I/O failed: -6 00:11:20.926 Read completed with error (sct=0, sc=8) 00:11:20.926 Read completed with error (sct=0, sc=8) 00:11:20.926 Read completed with error (sct=0, sc=8) 00:11:20.926 Write completed with error (sct=0, sc=8) 00:11:20.926 starting I/O failed: -6 00:11:20.926 Read completed with error (sct=0, sc=8) 00:11:20.926 Write completed with error (sct=0, sc=8) 00:11:20.926 Read completed with error (sct=0, sc=8) 00:11:20.926 Write completed with error (sct=0, sc=8) 00:11:20.926 starting I/O failed: -6 00:11:20.926 Write completed with error (sct=0, sc=8) 00:11:20.926 Read completed with error (sct=0, sc=8) 00:11:20.926 Write completed with error (sct=0, sc=8) 00:11:20.926 Read completed with error (sct=0, sc=8) 00:11:20.926 starting I/O failed: -6 00:11:20.926 Read completed with error (sct=0, sc=8) 00:11:20.926 Read completed with error (sct=0, 
sc=8) 00:11:20.926 Read completed with error (sct=0, sc=8) 00:11:20.926 Read completed with error (sct=0, sc=8) 00:11:20.926 starting I/O failed: -6 00:11:20.926 Write completed with error (sct=0, sc=8) 00:11:20.926 Read completed with error (sct=0, sc=8) 00:11:20.926 Write completed with error (sct=0, sc=8) 00:11:20.926 Write completed with error (sct=0, sc=8) 00:11:20.926 starting I/O failed: -6 00:11:20.926 Read completed with error (sct=0, sc=8) 00:11:20.926 Read completed with error (sct=0, sc=8) 00:11:20.926 Read completed with error (sct=0, sc=8) 00:11:20.926 Read completed with error (sct=0, sc=8) 00:11:20.926 starting I/O failed: -6 00:11:20.926 Read completed with error (sct=0, sc=8) 00:11:20.926 Read completed with error (sct=0, sc=8) 00:11:20.926 Read completed with error (sct=0, sc=8) 00:11:20.926 [2024-06-10 10:36:45.191377] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e94e90 is same with the state(5) to be set 00:11:20.926 Read completed with error (sct=0, sc=8) 00:11:20.926 Write completed with error (sct=0, sc=8) 00:11:20.926 Read completed with error (sct=0, sc=8) 00:11:20.926 Read completed with error (sct=0, sc=8) 00:11:20.926 Read completed with error (sct=0, sc=8) 00:11:20.926 Read completed with error (sct=0, sc=8) 00:11:20.926 Read completed with error (sct=0, sc=8) 00:11:20.926 Read completed with error (sct=0, sc=8) 00:11:20.926 Read completed with error (sct=0, sc=8) 00:11:20.927 Write completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Write completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Write completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Write completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Write completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Write completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Write completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Write completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Write 
completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 [2024-06-10 10:36:45.192899] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e97650 is same with the state(5) to be set 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 starting I/O failed: -6 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 starting I/O failed: -6 00:11:20.927 Write completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 starting I/O failed: -6 00:11:20.927 Write completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Write completed with error (sct=0, sc=8) 00:11:20.927 starting I/O failed: -6 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Write completed with error (sct=0, sc=8) 00:11:20.927 Write completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 starting I/O failed: -6 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Write completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 starting I/O failed: -6 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Write completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 starting I/O failed: -6 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Write completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 starting I/O failed: -6 00:11:20.927 Write completed with error (sct=0, sc=8) 00:11:20.927 Write completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 starting I/O failed: -6 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Write completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 starting I/O failed: -6 00:11:20.927 [2024-06-10 10:36:45.196562] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fca4c00c470 is same with the state(5) to be set 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Write completed with error (sct=0, sc=8) 00:11:20.927 Write completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Write completed with error (sct=0, sc=8) 00:11:20.927 Write completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Write completed with 
error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Write completed with error (sct=0, sc=8) 00:11:20.927 Write completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Write completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Write completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Write completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Write completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:20.927 Read completed with error (sct=0, sc=8) 00:11:22.314 [2024-06-10 10:36:46.165363] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e73500 is same with the state(5) to be set 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Write completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Write completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Write completed with error (sct=0, sc=8) 00:11:22.314 Write completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Write completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Write completed with error (sct=0, sc=8) 00:11:22.314 [2024-06-10 10:36:46.195130] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e93d00 is same with the state(5) to be set 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Write completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Write completed with error (sct=0, sc=8) 
00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Write completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Write completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Write completed with error (sct=0, sc=8) 00:11:22.314 Write completed with error (sct=0, sc=8) 00:11:22.314 [2024-06-10 10:36:46.195210] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e94cb0 is same with the state(5) to be set 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Write completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Write completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Write completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Write completed with error (sct=0, sc=8) 00:11:22.314 Write completed with error (sct=0, sc=8) 00:11:22.314 [2024-06-10 10:36:46.198350] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fca4c00bfe0 is same with the state(5) to be set 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Write completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Write completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Write completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Write completed with error (sct=0, sc=8) 00:11:22.314 Write completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Write completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 Read completed with error (sct=0, sc=8) 00:11:22.314 [2024-06-10 10:36:46.199155] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fca4c00c780 is same with the state(5) to be set 00:11:22.314 Initializing NVMe Controllers 00:11:22.314 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:22.314 Controller IO queue size 128, less than required. 
00:11:22.314 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:22.314 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:22.314 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:22.314 Initialization complete. Launching workers. 00:11:22.314 ======================================================== 00:11:22.314 Latency(us) 00:11:22.314 Device Information : IOPS MiB/s Average min max 00:11:22.314 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 159.45 0.08 918821.21 700.42 1006613.10 00:11:22.314 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.96 0.08 947180.35 259.20 2002313.01 00:11:22.314 ======================================================== 00:11:22.314 Total : 315.41 0.15 932843.98 259.20 2002313.01 00:11:22.314 00:11:22.314 [2024-06-10 10:36:46.199760] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e73500 (9): Bad file descriptor 00:11:22.314 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:11:22.314 10:36:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:22.314 10:36:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:11:22.314 10:36:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 723644 00:11:22.314 10:36:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:22.576 10:36:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:22.576 10:36:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 723644 00:11:22.576 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (723644) - No such process 00:11:22.576 10:36:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 723644 00:11:22.576 10:36:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@649 -- # local es=0 00:11:22.576 10:36:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # valid_exec_arg wait 723644 00:11:22.576 10:36:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@637 -- # local arg=wait 00:11:22.576 10:36:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:22.576 10:36:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # type -t wait 00:11:22.576 10:36:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:22.576 10:36:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # wait 723644 00:11:22.576 10:36:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # es=1 00:11:22.576 10:36:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:22.576 10:36:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:22.576 10:36:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:22.576 10:36:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:22.576 10:36:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:22.576 
10:36:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:22.576 10:36:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:22.576 10:36:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:22.576 10:36:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:22.576 10:36:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:22.576 [2024-06-10 10:36:46.732266] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:22.576 10:36:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:22.576 10:36:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:22.576 10:36:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:22.576 10:36:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:22.576 10:36:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:22.576 10:36:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=724324 00:11:22.576 10:36:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:11:22.576 10:36:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 724324 00:11:22.576 10:36:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:22.576 10:36:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:22.576 EAL: No free 2048 kB hugepages reported on node 1 00:11:22.576 [2024-06-10 10:36:46.798697] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
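Condensed, the first pass of delete_subsystem.sh traced above (and the second pass that follows with a 3-second perf run) comes down to the flow below. This is a sketch, not a verbatim copy of the script; rpc_cmd stands for scripts/rpc.py pointed at the target's /var/tmp/spdk.sock, and the timeout handling is simplified.

    # Build a subsystem whose only namespace sits on a deliberately slow delay
    # bdev, start fabric I/O against it, then delete the subsystem mid-flight.
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512
    rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2

    # Deleting the subsystem tears down the queue pairs under the initiator;
    # the in-flight commands complete with error (the sct=0/sc=8 completions
    # logged above) and perf exits with a non-zero status.
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    delay=0
    while kill -0 "$perf_pid" 2> /dev/null; do
        (( delay++ > 30 )) && exit 1    # give perf a bounded window to notice and exit
        sleep 0.5
    done
    NOT wait "$perf_pid"                # autotest helper: passes only because wait now fails

The second pass (perf_pid=724324) repeats the sequence with -t 3 and a tighter (( delay++ > 20 )) bound, which is the run of kill -0 / sleep 0.5 iterations visible just below.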
00:11:23.148 10:36:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:23.148 10:36:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 724324 00:11:23.148 10:36:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:23.719 10:36:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:23.719 10:36:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 724324 00:11:23.719 10:36:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:23.979 10:36:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:23.979 10:36:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 724324 00:11:23.979 10:36:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:24.550 10:36:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:24.550 10:36:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 724324 00:11:24.550 10:36:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:25.119 10:36:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:25.119 10:36:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 724324 00:11:25.119 10:36:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:25.691 10:36:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:25.691 10:36:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 724324 00:11:25.691 10:36:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:25.691 Initializing NVMe Controllers 00:11:25.691 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:25.691 Controller IO queue size 128, less than required. 00:11:25.691 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:25.691 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:25.691 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:25.691 Initialization complete. Launching workers. 
00:11:25.691 ======================================================== 00:11:25.691 Latency(us) 00:11:25.691 Device Information : IOPS MiB/s Average min max 00:11:25.691 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002246.61 1000244.35 1008037.47 00:11:25.691 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002977.91 1000317.14 1009041.23 00:11:25.691 ======================================================== 00:11:25.691 Total : 256.00 0.12 1002612.26 1000244.35 1009041.23 00:11:25.691 00:11:26.262 10:36:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:26.262 10:36:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 724324 00:11:26.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (724324) - No such process 00:11:26.262 10:36:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 724324 00:11:26.262 10:36:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:26.262 10:36:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:26.262 10:36:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:26.262 10:36:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:11:26.262 10:36:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:26.262 10:36:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:11:26.262 10:36:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:26.262 10:36:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:26.262 rmmod nvme_tcp 00:11:26.262 rmmod nvme_fabrics 00:11:26.262 rmmod nvme_keyring 00:11:26.262 10:36:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:26.262 10:36:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:11:26.262 10:36:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:11:26.262 10:36:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 723357 ']' 00:11:26.262 10:36:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 723357 00:11:26.262 10:36:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@949 -- # '[' -z 723357 ']' 00:11:26.262 10:36:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # kill -0 723357 00:11:26.262 10:36:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # uname 00:11:26.262 10:36:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:26.263 10:36:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 723357 00:11:26.263 10:36:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:26.263 10:36:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:26.263 10:36:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # echo 'killing process with pid 723357' 00:11:26.263 killing process with pid 723357 00:11:26.263 10:36:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # kill 723357 00:11:26.263 [2024-06-10 10:36:50.418056] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:26.263 10:36:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # wait 723357 00:11:26.525 10:36:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:26.525 10:36:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:26.525 10:36:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:26.525 10:36:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:26.525 10:36:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:26.525 10:36:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.525 10:36:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:26.525 10:36:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.440 10:36:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:28.440 00:11:28.440 real 0m17.956s 00:11:28.440 user 0m30.533s 00:11:28.440 sys 0m6.288s 00:11:28.440 10:36:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:28.440 10:36:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:28.440 ************************************ 00:11:28.440 END TEST nvmf_delete_subsystem 00:11:28.440 ************************************ 00:11:28.440 10:36:52 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:28.440 10:36:52 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:11:28.440 10:36:52 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:28.440 10:36:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:28.440 ************************************ 00:11:28.440 START TEST nvmf_ns_masking 00:11:28.440 ************************************ 00:11:28.440 10:36:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:28.702 * Looking for test storage... 
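One detail worth flagging as the ns_masking test sources nvmf/common.sh below: the host identity the initiator will present is generated per run. A rough sketch; the exact derivation of NVME_HOSTID from the generated NQN is an assumption here, though the values in this run (the host ID equals the uuid suffix of the hostnqn) are consistent with it.

    # Per-run host identity for the initiator side of the masking checks.
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # assumed: host ID = uuid suffix of the NQN
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

    # ns_masking.sh itself adds a separate random host ID for its allow/deny checks.
    HOSTID=$(uuidgen)                       # 63d66759-2757-42ec-9abc-ac91bae3ff9e in this run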
00:11:28.702 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.702 10:36:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:28.702 10:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:28.702 10:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:28.702 10:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.702 10:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:28.702 10:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.702 10:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:28.702 10:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.702 10:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.702 10:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.702 10:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.702 10:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.702 10:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:28.702 10:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:28.702 10:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.702 10:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:28.702 10:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:28.702 10:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:28.702 10:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:28.702 10:36:52 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.702 10:36:52 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.702 10:36:52 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.702 10:36:52 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.702 10:36:52 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.702 10:36:52 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.702 10:36:52 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:28.702 10:36:52 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.702 10:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:11:28.702 10:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:28.702 10:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:28.702 10:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:28.702 10:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:28.702 10:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:28.702 10:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:28.702 10:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:28.702 10:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:28.702 10:36:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:28.702 10:36:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:11:28.702 10:36:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:28.703 10:36:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:11:28.703 10:36:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:11:28.703 10:36:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=63d66759-2757-42ec-9abc-ac91bae3ff9e 00:11:28.703 10:36:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:11:28.703 10:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:28.703 10:36:52 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:28.703 10:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:28.703 10:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:28.703 10:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:28.703 10:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.703 10:36:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:28.703 10:36:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.703 10:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:28.703 10:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:28.703 10:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:28.703 10:36:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:36.874 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:36.874 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:36.874 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:36.875 Found net devices under 0000:31:00.0: cvl_0_0 00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
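The gather_supported_nvmf_pci_devs walk being traced here boils down to a sysfs lookup. A simplified sketch, assuming the same two e810 ports at 0000:31:00.0 and 0000:31:00.1 and skipping the driver and link-state filtering the real helper performs.

    # Map each candidate NIC's PCI address to its kernel net device name.
    net_devs=()
    for pci in 0000:31:00.0 0000:31:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done
    NVMF_TARGET_INTERFACE=${net_devs[0]}       # cvl_0_0 in this run
    NVMF_INITIATOR_INTERFACE=${net_devs[1]}    # cvl_0_1 in this run

The two interfaces then get the same namespace treatment shown earlier, after which the ns_masking target is started with -m 0xF.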
00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:36.875 Found net devices under 0000:31:00.1: cvl_0_1 00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:36.875 10:36:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:36.875 10:37:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:36.875 10:37:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:11:36.875 10:37:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:36.875 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:36.875 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.718 ms 00:11:36.875 00:11:36.875 --- 10.0.0.2 ping statistics --- 00:11:36.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.875 rtt min/avg/max/mdev = 0.718/0.718/0.718/0.000 ms 00:11:36.875 10:37:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:36.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:36.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:11:36.875 00:11:36.875 --- 10.0.0.1 ping statistics --- 00:11:36.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.875 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:11:36.875 10:37:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:36.875 10:37:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:36.875 10:37:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:36.875 10:37:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:36.875 10:37:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:36.875 10:37:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:36.875 10:37:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:36.875 10:37:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:36.875 10:37:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:36.875 10:37:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:11:36.875 10:37:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:36.875 10:37:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:36.875 10:37:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:36.875 10:37:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=729383 00:11:36.875 10:37:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 729383 00:11:36.875 10:37:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:36.875 10:37:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@830 -- # '[' -z 729383 ']' 00:11:36.875 10:37:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.875 10:37:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:36.875 10:37:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.875 10:37:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:36.875 10:37:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:36.875 [2024-06-10 10:37:00.204448] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
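At this point the test's TCP topology is in place: one E810 port (cvl_0_0) has been moved into the private namespace cvl_0_0_ns_spdk and addressed as 10.0.0.2 for the SPDK target, while the second port (cvl_0_1) stays in the default namespace as 10.0.0.1 for the initiator, and both directions have been verified with ping before nvmf_tgt is launched inside the namespace. A condensed replay of those commands, taken from the nvmf_tcp_init trace above (interface names and addresses are the ones this test bed uses; adjust to your NICs):

    TGT_IF=cvl_0_0          # port handed to the SPDK target
    INI_IF=cvl_0_1          # port kept for the host/initiator
    NS=cvl_0_0_ns_spdk      # network namespace the target runs in

    ip -4 addr flush "$TGT_IF" && ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP (port 4420) in
    ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1     # sanity-check both directions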
00:11:36.875 [2024-06-10 10:37:00.204497] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:36.875 EAL: No free 2048 kB hugepages reported on node 1 00:11:36.875 [2024-06-10 10:37:00.272115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:36.875 [2024-06-10 10:37:00.338233] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:36.875 [2024-06-10 10:37:00.338274] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:36.875 [2024-06-10 10:37:00.338282] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:36.875 [2024-06-10 10:37:00.338288] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:36.875 [2024-06-10 10:37:00.338294] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:36.875 [2024-06-10 10:37:00.338364] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.875 [2024-06-10 10:37:00.338480] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:11:36.875 [2024-06-10 10:37:00.338639] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.875 [2024-06-10 10:37:00.338640] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:11:36.875 10:37:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:36.875 10:37:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@863 -- # return 0 00:11:36.875 10:37:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:36.875 10:37:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@729 -- # xtrace_disable 00:11:36.875 10:37:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:36.875 10:37:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:36.875 10:37:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:36.875 [2024-06-10 10:37:01.152232] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:37.137 10:37:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:11:37.137 10:37:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:11:37.137 10:37:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:37.137 Malloc1 00:11:37.137 10:37:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:37.397 Malloc2 00:11:37.397 10:37:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:37.659 10:37:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:37.659 10:37:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:37.919 [2024-06-10 10:37:02.021113] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:37.919 [2024-06-10 10:37:02.021365] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:37.919 10:37:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:11:37.919 10:37:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 63d66759-2757-42ec-9abc-ac91bae3ff9e -a 10.0.0.2 -s 4420 -i 4 00:11:37.919 10:37:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:11:37.919 10:37:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:11:37.919 10:37:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:11:37.919 10:37:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:11:37.919 10:37:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:11:40.462 10:37:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:11:40.462 10:37:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:40.462 10:37:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:11:40.462 10:37:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:11:40.462 10:37:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:11:40.462 10:37:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:11:40.462 10:37:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:40.462 10:37:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:40.462 10:37:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:40.462 10:37:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:40.462 10:37:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:11:40.462 10:37:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:40.462 10:37:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:40.462 [ 0]:0x1 00:11:40.462 10:37:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:40.462 10:37:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:40.462 10:37:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=35e40832a4f7497398301b3b70d68a6e 00:11:40.462 10:37:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 35e40832a4f7497398301b3b70d68a6e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:40.462 10:37:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:40.462 10:37:04 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:11:40.462 10:37:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:40.462 10:37:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:40.462 [ 0]:0x1 00:11:40.462 10:37:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:40.462 10:37:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:40.462 10:37:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=35e40832a4f7497398301b3b70d68a6e 00:11:40.462 10:37:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 35e40832a4f7497398301b3b70d68a6e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:40.462 10:37:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:11:40.462 10:37:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:40.462 10:37:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:40.462 [ 1]:0x2 00:11:40.462 10:37:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:40.462 10:37:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:40.462 10:37:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=b2f8e9062fe84282b8ac773be6556b92 00:11:40.462 10:37:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ b2f8e9062fe84282b8ac773be6556b92 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:40.462 10:37:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:11:40.462 10:37:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:40.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.462 10:37:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:40.724 10:37:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:40.724 10:37:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:11:40.724 10:37:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 63d66759-2757-42ec-9abc-ac91bae3ff9e -a 10.0.0.2 -s 4420 -i 4 00:11:40.985 10:37:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:40.985 10:37:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:11:40.985 10:37:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:11:40.985 10:37:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 1 ]] 00:11:40.985 10:37:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=1 00:11:40.985 10:37:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:11:42.910 10:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:11:42.910 10:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:42.910 10:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # 
grep -c SPDKISFASTANDAWESOME 00:11:42.910 10:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:11:42.910 10:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:11:42.910 10:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:11:42.910 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:43.171 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:43.171 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:43.171 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:43.171 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:11:43.171 10:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:11:43.171 10:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:11:43.171 10:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:11:43.171 10:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:43.171 10:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:11:43.171 10:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:43.171 10:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:11:43.171 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:43.171 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:43.171 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:43.171 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:43.171 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:43.171 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:43.171 10:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:11:43.171 10:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:43.171 10:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:43.171 10:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:43.171 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:11:43.172 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:43.172 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:43.172 [ 0]:0x2 00:11:43.172 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:43.172 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:43.172 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=b2f8e9062fe84282b8ac773be6556b92 00:11:43.172 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ b2f8e9062fe84282b8ac773be6556b92 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:43.172 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:43.433 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:11:43.433 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:43.433 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:43.433 [ 0]:0x1 00:11:43.433 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:43.433 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:43.433 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=35e40832a4f7497398301b3b70d68a6e 00:11:43.433 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 35e40832a4f7497398301b3b70d68a6e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:43.433 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:11:43.433 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:43.433 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:43.433 [ 1]:0x2 00:11:43.433 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:43.433 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:43.695 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=b2f8e9062fe84282b8ac773be6556b92 00:11:43.695 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ b2f8e9062fe84282b8ac773be6556b92 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:43.695 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:43.695 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:11:43.695 10:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:11:43.695 10:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:11:43.695 10:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:11:43.695 10:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:43.695 10:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:11:43.695 10:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:43.695 10:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:11:43.695 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:43.695 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:43.695 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:43.695 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:43.695 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:43.695 10:37:07 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:43.695 10:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:11:43.695 10:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:43.695 10:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:43.695 10:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:43.695 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:11:43.695 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:43.695 10:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:43.955 [ 0]:0x2 00:11:43.955 10:37:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:43.955 10:37:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:43.955 10:37:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=b2f8e9062fe84282b8ac773be6556b92 00:11:43.955 10:37:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ b2f8e9062fe84282b8ac773be6556b92 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:43.955 10:37:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:11:43.955 10:37:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:43.955 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.955 10:37:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:44.216 10:37:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:11:44.216 10:37:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 63d66759-2757-42ec-9abc-ac91bae3ff9e -a 10.0.0.2 -s 4420 -i 4 00:11:44.216 10:37:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:44.216 10:37:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:11:44.216 10:37:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:11:44.216 10:37:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:11:44.216 10:37:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:11:44.216 10:37:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 
-- # nvme list-subsys -o json 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:46.784 [ 0]:0x1 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=35e40832a4f7497398301b3b70d68a6e 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 35e40832a4f7497398301b3b70d68a6e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:46.784 [ 1]:0x2 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=b2f8e9062fe84282b8ac773be6556b92 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ b2f8e9062fe84282b8ac773be6556b92 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking 
-- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:46.784 [ 0]:0x2 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=b2f8e9062fe84282b8ac773be6556b92 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ b2f8e9062fe84282b8ac773be6556b92 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:46.784 10:37:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:46.784 [2024-06-10 10:37:11.053277] nvmf_rpc.c:1781:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:47.109 
request: 00:11:47.109 { 00:11:47.109 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:47.109 "nsid": 2, 00:11:47.109 "host": "nqn.2016-06.io.spdk:host1", 00:11:47.109 "method": "nvmf_ns_remove_host", 00:11:47.109 "req_id": 1 00:11:47.109 } 00:11:47.109 Got JSON-RPC error response 00:11:47.109 response: 00:11:47.109 { 00:11:47.109 "code": -32602, 00:11:47.109 "message": "Invalid parameters" 00:11:47.109 } 00:11:47.109 10:37:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:11:47.109 10:37:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:47.109 10:37:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:47.109 10:37:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:47.109 10:37:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:11:47.109 10:37:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:11:47.109 10:37:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:11:47.109 10:37:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:11:47.109 10:37:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:47.109 10:37:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:11:47.109 10:37:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:47.109 10:37:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:11:47.109 10:37:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:47.109 10:37:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:47.109 10:37:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:47.109 10:37:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:47.109 10:37:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:47.109 10:37:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:47.110 10:37:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:11:47.110 10:37:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:47.110 10:37:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:47.110 10:37:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:47.110 10:37:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:11:47.110 10:37:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:47.110 10:37:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:47.110 [ 0]:0x2 00:11:47.110 10:37:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:47.110 10:37:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:47.110 10:37:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=b2f8e9062fe84282b8ac773be6556b92 00:11:47.110 10:37:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ b2f8e9062fe84282b8ac773be6556b92 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:47.110 10:37:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:11:47.110 10:37:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:47.110 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.110 10:37:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:47.370 10:37:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:11:47.370 10:37:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:11:47.370 10:37:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:47.370 10:37:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:11:47.370 10:37:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:47.370 10:37:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:11:47.370 10:37:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:47.370 10:37:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:47.370 rmmod nvme_tcp 00:11:47.370 rmmod nvme_fabrics 00:11:47.370 rmmod nvme_keyring 00:11:47.370 10:37:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:47.370 10:37:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:11:47.370 10:37:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:11:47.370 10:37:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 729383 ']' 00:11:47.370 10:37:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 729383 00:11:47.370 10:37:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@949 -- # '[' -z 729383 ']' 00:11:47.370 10:37:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # kill -0 729383 00:11:47.370 10:37:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # uname 00:11:47.370 10:37:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:47.370 10:37:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 729383 00:11:47.632 10:37:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:47.632 10:37:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:47.632 10:37:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # echo 'killing process with pid 729383' 00:11:47.632 killing process with pid 729383 00:11:47.632 10:37:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@968 -- # kill 729383 00:11:47.632 [2024-06-10 10:37:11.660904] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:47.632 10:37:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@973 -- # wait 729383 00:11:47.632 10:37:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:47.632 10:37:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:47.632 10:37:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:47.632 10:37:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
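Before the namespace cleanup that follows, a condensed recap of the RPC and nvme-cli sequence the ns_masking test exercised above (the rpc.py path is shortened here; the cnode1/host1 NQNs and /dev/nvme0 are this run's own names):

    # Target side: namespace 1 is added hidden, then selectively exposed to host1.
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

    # Host side: ns_is_visible() treats a namespace as masked when it is absent from
    # list-ns output or reports an all-zero NGUID in id-ns, as seen in the trace.
    nvme list-ns /dev/nvme0 | grep 0x1
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid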
00:11:47.632 10:37:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:47.632 10:37:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.632 10:37:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:47.632 10:37:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.179 10:37:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:50.179 00:11:50.179 real 0m21.183s 00:11:50.179 user 0m50.334s 00:11:50.179 sys 0m6.978s 00:11:50.179 10:37:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:50.179 10:37:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:50.179 ************************************ 00:11:50.179 END TEST nvmf_ns_masking 00:11:50.179 ************************************ 00:11:50.179 10:37:13 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:50.179 10:37:13 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:50.179 10:37:13 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:11:50.179 10:37:13 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:50.179 10:37:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:50.179 ************************************ 00:11:50.179 START TEST nvmf_nvme_cli 00:11:50.179 ************************************ 00:11:50.179 10:37:13 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:50.179 * Looking for test storage... 
00:11:50.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:11:50.179 10:37:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:58.324 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:58.324 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:58.324 Found net devices under 0000:31:00.0: cvl_0_0 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:58.324 Found net devices under 0000:31:00.1: cvl_0_1 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:58.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:58.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.742 ms 00:11:58.324 00:11:58.324 --- 10.0.0.2 ping statistics --- 00:11:58.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.324 rtt min/avg/max/mdev = 0.742/0.742/0.742/0.000 ms 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:58.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:58.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.450 ms 00:11:58.324 00:11:58.324 --- 10.0.0.1 ping statistics --- 00:11:58.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.324 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:58.324 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:58.325 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:58.325 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:58.325 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:58.325 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:58.325 10:37:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:58.325 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:58.325 10:37:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:58.325 10:37:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:58.325 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=735959 00:11:58.325 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 735959 00:11:58.325 10:37:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:58.325 10:37:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@830 -- # '[' -z 735959 ']' 00:11:58.325 10:37:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.325 10:37:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:58.325 10:37:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.325 10:37:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:58.325 10:37:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:58.325 [2024-06-10 10:37:21.533692] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:11:58.325 [2024-06-10 10:37:21.533755] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:58.325 EAL: No free 2048 kB hugepages reported on node 1 00:11:58.325 [2024-06-10 10:37:21.606198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:58.325 [2024-06-10 10:37:21.680937] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:58.325 [2024-06-10 10:37:21.680976] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
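The nvmf_tcp_init/nvmfappstart sequence traced above reduces to a short list of commands. A condensed sketch, using the cvl_0_0/cvl_0_1 interface names, addresses, and nvmf_tgt path reported in this run (the harness also flushes any stale addresses first):

  # isolate one port of the e810 pair in a private namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open NVMe/TCP port 4420 on the initiator-side interface and verify reachability
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # start the target inside the namespace, as nvmfappstart does above
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &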
00:11:58.325 [2024-06-10 10:37:21.680984] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:58.325 [2024-06-10 10:37:21.680990] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:58.325 [2024-06-10 10:37:21.680995] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:58.325 [2024-06-10 10:37:21.681137] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:11:58.325 [2024-06-10 10:37:21.681271] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:11:58.325 [2024-06-10 10:37:21.681370] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.325 [2024-06-10 10:37:21.681371] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@863 -- # return 0 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@729 -- # xtrace_disable 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:58.325 [2024-06-10 10:37:22.363796] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:58.325 Malloc0 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:58.325 Malloc1 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:58.325 10:37:22 
nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:58.325 [2024-06-10 10:37:22.453534] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:58.325 [2024-06-10 10:37:22.453773] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:11:58.325 00:11:58.325 Discovery Log Number of Records 2, Generation counter 2 00:11:58.325 =====Discovery Log Entry 0====== 00:11:58.325 trtype: tcp 00:11:58.325 adrfam: ipv4 00:11:58.325 subtype: current discovery subsystem 00:11:58.325 treq: not required 00:11:58.325 portid: 0 00:11:58.325 trsvcid: 4420 00:11:58.325 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:58.325 traddr: 10.0.0.2 00:11:58.325 eflags: explicit discovery connections, duplicate discovery information 00:11:58.325 sectype: none 00:11:58.325 =====Discovery Log Entry 1====== 00:11:58.325 trtype: tcp 00:11:58.325 adrfam: ipv4 00:11:58.325 subtype: nvme subsystem 00:11:58.325 treq: not required 00:11:58.325 portid: 0 00:11:58.325 trsvcid: 4420 00:11:58.325 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:58.325 traddr: 10.0.0.2 00:11:58.325 eflags: none 00:11:58.325 sectype: none 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 
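The rpc_cmd calls traced above are the harness's wrapper around SPDK's JSON-RPC interface (scripts/rpc.py) talking to the target's /var/tmp/spdk.sock. Issued directly, the target-side configuration for this test looks roughly like:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # the two-entry discovery log shown above is then read with nvme-cli,
  # passing the same --hostnqn/--hostid values the trace shows
  nvme discover -t tcp -a 10.0.0.2 -s 4420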
00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:58.325 10:37:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:00.238 10:37:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:00.238 10:37:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # local i=0 00:12:00.238 10:37:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:00.238 10:37:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:12:00.238 10:37:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:12:00.238 10:37:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # sleep 2 00:12:02.152 10:37:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:02.152 10:37:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:02.153 10:37:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:02.153 10:37:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:12:02.153 10:37:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:12:02.153 10:37:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # return 0 00:12:02.153 10:37:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:02.153 10:37:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:02.153 10:37:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:02.153 10:37:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:02.153 10:37:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:02.153 10:37:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:02.153 10:37:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:02.153 10:37:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:02.153 10:37:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:02.153 10:37:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:02.153 10:37:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:02.153 10:37:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:02.153 10:37:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:02.153 10:37:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:02.153 10:37:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:02.153 /dev/nvme0n1 ]] 00:12:02.153 10:37:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:02.153 10:37:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:02.153 10:37:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:02.153 10:37:26 
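On the host side, the connect/verify cycle that waitforserial and get_nvme_devs drive here can be reproduced by hand. A minimal sketch with the values from this run (the harness also passes --hostnqn/--hostid on the connect):

  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  # both namespaces (Malloc0/Malloc1) should surface with the subsystem's serial
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME    # waitforserial expects 2
  nvme list
  # tear the session back down, as target/nvme_cli.sh@60 does below
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1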
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:02.153 10:37:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:02.153 10:37:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:02.153 10:37:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:02.153 10:37:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:02.153 10:37:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:02.153 10:37:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:02.153 10:37:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:02.153 10:37:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:02.153 10:37:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:02.153 10:37:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:02.153 10:37:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:02.153 10:37:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:02.153 10:37:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:02.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.723 10:37:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:02.723 10:37:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1218 -- # local i=0 00:12:02.723 10:37:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:12:02.723 10:37:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:02.723 10:37:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:12:02.723 10:37:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:02.723 10:37:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1230 -- # return 0 00:12:02.723 10:37:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:02.723 10:37:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:02.723 10:37:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:02.723 10:37:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:02.723 10:37:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:02.723 10:37:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:02.723 10:37:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:02.723 10:37:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:02.723 10:37:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:12:02.723 10:37:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:02.723 10:37:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:12:02.723 10:37:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:02.723 10:37:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:02.723 rmmod nvme_tcp 00:12:02.723 rmmod nvme_fabrics 00:12:02.723 rmmod nvme_keyring 00:12:02.723 10:37:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:12:02.723 10:37:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:12:02.723 10:37:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:12:02.723 10:37:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 735959 ']' 00:12:02.724 10:37:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 735959 00:12:02.724 10:37:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@949 -- # '[' -z 735959 ']' 00:12:02.724 10:37:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # kill -0 735959 00:12:02.724 10:37:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # uname 00:12:02.724 10:37:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:02.724 10:37:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 735959 00:12:02.724 10:37:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:02.724 10:37:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:02.724 10:37:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # echo 'killing process with pid 735959' 00:12:02.724 killing process with pid 735959 00:12:02.724 10:37:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # kill 735959 00:12:02.724 [2024-06-10 10:37:26.892420] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:02.724 10:37:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # wait 735959 00:12:02.985 10:37:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:02.985 10:37:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:02.985 10:37:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:02.985 10:37:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:02.985 10:37:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:02.985 10:37:27 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.985 10:37:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:02.985 10:37:27 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.924 10:37:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:04.924 00:12:04.924 real 0m15.154s 00:12:04.924 user 0m23.504s 00:12:04.924 sys 0m6.047s 00:12:04.924 10:37:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:04.924 10:37:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:04.924 ************************************ 00:12:04.924 END TEST nvmf_nvme_cli 00:12:04.924 ************************************ 00:12:04.924 10:37:29 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:12:04.924 10:37:29 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:04.924 10:37:29 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:04.924 10:37:29 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:04.924 10:37:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:04.924 ************************************ 00:12:04.924 START TEST 
nvmf_vfio_user 00:12:04.924 ************************************ 00:12:04.924 10:37:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:05.186 * Looking for test storage... 00:12:05.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 
00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=737673 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 737673' 00:12:05.186 Process pid: 737673 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:05.186 10:37:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 737673 00:12:05.187 10:37:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:05.187 10:37:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@830 -- # '[' -z 737673 ']' 00:12:05.187 10:37:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.187 10:37:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:05.187 10:37:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.187 10:37:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:05.187 10:37:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:05.187 [2024-06-10 10:37:29.399065] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:12:05.187 [2024-06-10 10:37:29.399136] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.187 EAL: No free 2048 kB hugepages reported on node 1 00:12:05.187 [2024-06-10 10:37:29.466493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:05.448 [2024-06-10 10:37:29.541772] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:05.448 [2024-06-10 10:37:29.541813] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:05.448 [2024-06-10 10:37:29.541821] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:05.448 [2024-06-10 10:37:29.541828] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:05.448 [2024-06-10 10:37:29.541833] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
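Unlike the TCP run, the vfio-user target does not need a network namespace: its listeners are local socket directories under /var/run/vfio-user (the trace later shows the controller socket at .../vfio-user1/1/cntrl). The launch traced above amounts to, roughly:

  # clear any stale vfio-user sockets and start the target on an explicit core list
  rm -rf /var/run/vfio-user
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
  # then wait for /var/tmp/spdk.sock before issuing RPCs, as waitforlisten does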
00:12:05.448 [2024-06-10 10:37:29.542397] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.448 [2024-06-10 10:37:29.542590] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.448 [2024-06-10 10:37:29.542916] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:12:05.448 [2024-06-10 10:37:29.542917] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.019 10:37:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:06.019 10:37:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@863 -- # return 0 00:12:06.019 10:37:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:06.961 10:37:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:07.222 10:37:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:07.222 10:37:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:07.222 10:37:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:07.222 10:37:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:07.222 10:37:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:07.483 Malloc1 00:12:07.483 10:37:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:07.483 10:37:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:07.744 10:37:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:07.744 [2024-06-10 10:37:32.029294] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:08.005 10:37:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:08.005 10:37:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:08.005 10:37:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:08.005 Malloc2 00:12:08.006 10:37:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:08.266 10:37:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:08.527 10:37:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 
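setup_nvmf_vfio_user, traced above, builds one emulated controller per device purely through RPC plus a socket directory. For the first device the sequence is as below; the second repeats it with Malloc2, nqn.2019-07.io.spdk:cnode2, serial SPDK2 and .../vfio-user2/2:

  rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  # for VFIOUSER the listener address is the socket directory, not an IP:port
  rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
      -a /var/run/vfio-user/domain/vfio-user1/1 -s 0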
00:12:08.527 10:37:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:08.527 10:37:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:08.527 10:37:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:08.527 10:37:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:08.527 10:37:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:08.527 10:37:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:08.527 [2024-06-10 10:37:32.768622] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:12:08.527 [2024-06-10 10:37:32.768664] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid738408 ] 00:12:08.527 EAL: No free 2048 kB hugepages reported on node 1 00:12:08.527 [2024-06-10 10:37:32.801926] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:08.527 [2024-06-10 10:37:32.810589] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:08.527 [2024-06-10 10:37:32.810609] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9ff86e7000 00:12:08.527 [2024-06-10 10:37:32.811590] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:08.527 [2024-06-10 10:37:32.812590] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:08.527 [2024-06-10 10:37:32.813593] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:08.527 [2024-06-10 10:37:32.814605] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:08.790 [2024-06-10 10:37:32.815615] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:08.790 [2024-06-10 10:37:32.816622] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:08.790 [2024-06-10 10:37:32.817623] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:08.790 [2024-06-10 10:37:32.818628] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:08.790 [2024-06-10 10:37:32.819635] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:08.790 [2024-06-10 10:37:32.819647] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9ff86dc000 00:12:08.790 [2024-06-10 10:37:32.820975] 
vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:08.790 [2024-06-10 10:37:32.837894] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:08.790 [2024-06-10 10:37:32.837917] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:08.790 [2024-06-10 10:37:32.842769] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:08.790 [2024-06-10 10:37:32.842815] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:08.790 [2024-06-10 10:37:32.842901] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:08.790 [2024-06-10 10:37:32.842919] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:08.790 [2024-06-10 10:37:32.842928] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:08.790 [2024-06-10 10:37:32.843767] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:08.790 [2024-06-10 10:37:32.843777] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:08.790 [2024-06-10 10:37:32.843784] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:08.790 [2024-06-10 10:37:32.844770] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:08.790 [2024-06-10 10:37:32.844778] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:08.790 [2024-06-10 10:37:32.844785] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:08.790 [2024-06-10 10:37:32.845779] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:08.790 [2024-06-10 10:37:32.845787] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:08.790 [2024-06-10 10:37:32.846782] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:08.790 [2024-06-10 10:37:32.846790] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:08.790 [2024-06-10 10:37:32.846795] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:08.790 [2024-06-10 10:37:32.846802] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:08.790 
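The BAR-mapping and controller bring-up messages in this part of the trace come from spdk_nvme_identify attaching to the emulated controller over its vfio-user socket. The invocation used by the harness is shown above; the -L flags enable the nvme, nvme_vfio and vfio_pci debug components that emit these lines:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -g -L nvme -L nvme_vfio -L vfio_pci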
[2024-06-10 10:37:32.846907] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:08.790 [2024-06-10 10:37:32.846912] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:08.790 [2024-06-10 10:37:32.846917] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:08.790 [2024-06-10 10:37:32.847785] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:08.790 [2024-06-10 10:37:32.848793] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:08.790 [2024-06-10 10:37:32.849795] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:08.790 [2024-06-10 10:37:32.850792] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:08.790 [2024-06-10 10:37:32.850845] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:08.790 [2024-06-10 10:37:32.851803] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:08.790 [2024-06-10 10:37:32.851810] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:08.790 [2024-06-10 10:37:32.851815] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:08.790 [2024-06-10 10:37:32.851836] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:08.790 [2024-06-10 10:37:32.851846] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:08.790 [2024-06-10 10:37:32.851864] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:08.790 [2024-06-10 10:37:32.851869] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:08.790 [2024-06-10 10:37:32.851884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:08.790 [2024-06-10 10:37:32.851915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:08.790 [2024-06-10 10:37:32.851924] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:08.790 [2024-06-10 10:37:32.851929] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:08.790 [2024-06-10 10:37:32.851933] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:08.790 [2024-06-10 10:37:32.851940] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:08.790 [2024-06-10 10:37:32.851944] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:08.790 [2024-06-10 10:37:32.851949] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:08.790 [2024-06-10 10:37:32.851954] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:08.790 [2024-06-10 10:37:32.851962] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:08.790 [2024-06-10 10:37:32.851971] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:08.790 [2024-06-10 10:37:32.851982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:08.790 [2024-06-10 10:37:32.851993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.790 [2024-06-10 10:37:32.852002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.790 [2024-06-10 10:37:32.852010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.790 [2024-06-10 10:37:32.852018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.790 [2024-06-10 10:37:32.852022] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:08.790 [2024-06-10 10:37:32.852031] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:08.790 [2024-06-10 10:37:32.852040] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:08.790 [2024-06-10 10:37:32.852051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:08.790 [2024-06-10 10:37:32.852056] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:08.790 [2024-06-10 10:37:32.852061] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:08.790 [2024-06-10 10:37:32.852070] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:08.790 [2024-06-10 10:37:32.852076] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:08.790 [2024-06-10 10:37:32.852085] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:08.790 [2024-06-10 
10:37:32.852092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:08.791 [2024-06-10 10:37:32.852141] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:08.791 [2024-06-10 10:37:32.852149] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:08.791 [2024-06-10 10:37:32.852156] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:08.791 [2024-06-10 10:37:32.852161] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:08.791 [2024-06-10 10:37:32.852167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:08.791 [2024-06-10 10:37:32.852177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:08.791 [2024-06-10 10:37:32.852186] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:08.791 [2024-06-10 10:37:32.852198] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:08.791 [2024-06-10 10:37:32.852206] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:08.791 [2024-06-10 10:37:32.852213] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:08.791 [2024-06-10 10:37:32.852217] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:08.791 [2024-06-10 10:37:32.852223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:08.791 [2024-06-10 10:37:32.852235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:08.791 [2024-06-10 10:37:32.852251] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:08.791 [2024-06-10 10:37:32.852259] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:08.791 [2024-06-10 10:37:32.852266] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:08.791 [2024-06-10 10:37:32.852270] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:08.791 [2024-06-10 10:37:32.852276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:08.791 [2024-06-10 10:37:32.852288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:08.791 [2024-06-10 10:37:32.852296] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:08.791 
[2024-06-10 10:37:32.852303] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:12:08.791 [2024-06-10 10:37:32.852310] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:08.791 [2024-06-10 10:37:32.852318] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:08.791 [2024-06-10 10:37:32.852323] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:08.791 [2024-06-10 10:37:32.852328] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:08.791 [2024-06-10 10:37:32.852332] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:08.791 [2024-06-10 10:37:32.852337] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:08.791 [2024-06-10 10:37:32.852357] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:08.791 [2024-06-10 10:37:32.852366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:08.791 [2024-06-10 10:37:32.852378] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:08.791 [2024-06-10 10:37:32.852386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:08.791 [2024-06-10 10:37:32.852397] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:08.791 [2024-06-10 10:37:32.852406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:08.791 [2024-06-10 10:37:32.852416] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:08.791 [2024-06-10 10:37:32.852425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:08.791 [2024-06-10 10:37:32.852435] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:08.791 [2024-06-10 10:37:32.852439] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:08.791 [2024-06-10 10:37:32.852443] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:08.791 [2024-06-10 10:37:32.852446] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:08.791 [2024-06-10 10:37:32.852453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:08.791 [2024-06-10 10:37:32.852460] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:08.791 [2024-06-10 10:37:32.852464] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:08.791 [2024-06-10 10:37:32.852470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:08.791 [2024-06-10 10:37:32.852477] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:08.791 [2024-06-10 10:37:32.852481] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:08.791 [2024-06-10 10:37:32.852486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:08.791 [2024-06-10 10:37:32.852494] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:08.791 [2024-06-10 10:37:32.852498] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:08.791 [2024-06-10 10:37:32.852504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:08.791 [2024-06-10 10:37:32.852513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:08.791 [2024-06-10 10:37:32.852525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:08.791 [2024-06-10 10:37:32.852535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:08.791 [2024-06-10 10:37:32.852544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:08.791 ===================================================== 00:12:08.791 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:08.791 ===================================================== 00:12:08.791 Controller Capabilities/Features 00:12:08.791 ================================ 00:12:08.791 Vendor ID: 4e58 00:12:08.791 Subsystem Vendor ID: 4e58 00:12:08.791 Serial Number: SPDK1 00:12:08.791 Model Number: SPDK bdev Controller 00:12:08.791 Firmware Version: 24.09 00:12:08.791 Recommended Arb Burst: 6 00:12:08.791 IEEE OUI Identifier: 8d 6b 50 00:12:08.791 Multi-path I/O 00:12:08.791 May have multiple subsystem ports: Yes 00:12:08.791 May have multiple controllers: Yes 00:12:08.791 Associated with SR-IOV VF: No 00:12:08.791 Max Data Transfer Size: 131072 00:12:08.791 Max Number of Namespaces: 32 00:12:08.791 Max Number of I/O Queues: 127 00:12:08.791 NVMe Specification Version (VS): 1.3 00:12:08.791 NVMe Specification Version (Identify): 1.3 00:12:08.791 Maximum Queue Entries: 256 00:12:08.791 Contiguous Queues Required: Yes 00:12:08.791 Arbitration Mechanisms Supported 00:12:08.791 Weighted Round Robin: Not Supported 00:12:08.791 Vendor Specific: Not Supported 00:12:08.791 Reset Timeout: 15000 ms 00:12:08.791 Doorbell Stride: 4 bytes 00:12:08.791 NVM Subsystem Reset: Not Supported 00:12:08.791 Command Sets Supported 00:12:08.791 NVM Command Set: Supported 00:12:08.791 Boot Partition: Not Supported 00:12:08.791 Memory Page Size Minimum: 4096 bytes 00:12:08.791 Memory Page Size Maximum: 4096 bytes 00:12:08.791 Persistent Memory Region: Not Supported 00:12:08.791 Optional Asynchronous 
Events Supported 00:12:08.791 Namespace Attribute Notices: Supported 00:12:08.791 Firmware Activation Notices: Not Supported 00:12:08.791 ANA Change Notices: Not Supported 00:12:08.791 PLE Aggregate Log Change Notices: Not Supported 00:12:08.791 LBA Status Info Alert Notices: Not Supported 00:12:08.791 EGE Aggregate Log Change Notices: Not Supported 00:12:08.791 Normal NVM Subsystem Shutdown event: Not Supported 00:12:08.791 Zone Descriptor Change Notices: Not Supported 00:12:08.791 Discovery Log Change Notices: Not Supported 00:12:08.791 Controller Attributes 00:12:08.791 128-bit Host Identifier: Supported 00:12:08.791 Non-Operational Permissive Mode: Not Supported 00:12:08.791 NVM Sets: Not Supported 00:12:08.791 Read Recovery Levels: Not Supported 00:12:08.791 Endurance Groups: Not Supported 00:12:08.791 Predictable Latency Mode: Not Supported 00:12:08.791 Traffic Based Keep ALive: Not Supported 00:12:08.791 Namespace Granularity: Not Supported 00:12:08.791 SQ Associations: Not Supported 00:12:08.791 UUID List: Not Supported 00:12:08.791 Multi-Domain Subsystem: Not Supported 00:12:08.791 Fixed Capacity Management: Not Supported 00:12:08.791 Variable Capacity Management: Not Supported 00:12:08.791 Delete Endurance Group: Not Supported 00:12:08.791 Delete NVM Set: Not Supported 00:12:08.791 Extended LBA Formats Supported: Not Supported 00:12:08.791 Flexible Data Placement Supported: Not Supported 00:12:08.791 00:12:08.791 Controller Memory Buffer Support 00:12:08.791 ================================ 00:12:08.791 Supported: No 00:12:08.791 00:12:08.791 Persistent Memory Region Support 00:12:08.791 ================================ 00:12:08.791 Supported: No 00:12:08.791 00:12:08.791 Admin Command Set Attributes 00:12:08.791 ============================ 00:12:08.791 Security Send/Receive: Not Supported 00:12:08.791 Format NVM: Not Supported 00:12:08.791 Firmware Activate/Download: Not Supported 00:12:08.791 Namespace Management: Not Supported 00:12:08.791 Device Self-Test: Not Supported 00:12:08.791 Directives: Not Supported 00:12:08.791 NVMe-MI: Not Supported 00:12:08.791 Virtualization Management: Not Supported 00:12:08.791 Doorbell Buffer Config: Not Supported 00:12:08.791 Get LBA Status Capability: Not Supported 00:12:08.791 Command & Feature Lockdown Capability: Not Supported 00:12:08.791 Abort Command Limit: 4 00:12:08.791 Async Event Request Limit: 4 00:12:08.791 Number of Firmware Slots: N/A 00:12:08.791 Firmware Slot 1 Read-Only: N/A 00:12:08.791 Firmware Activation Without Reset: N/A 00:12:08.791 Multiple Update Detection Support: N/A 00:12:08.791 Firmware Update Granularity: No Information Provided 00:12:08.791 Per-Namespace SMART Log: No 00:12:08.791 Asymmetric Namespace Access Log Page: Not Supported 00:12:08.791 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:08.791 Command Effects Log Page: Supported 00:12:08.791 Get Log Page Extended Data: Supported 00:12:08.791 Telemetry Log Pages: Not Supported 00:12:08.791 Persistent Event Log Pages: Not Supported 00:12:08.791 Supported Log Pages Log Page: May Support 00:12:08.791 Commands Supported & Effects Log Page: Not Supported 00:12:08.791 Feature Identifiers & Effects Log Page:May Support 00:12:08.791 NVMe-MI Commands & Effects Log Page: May Support 00:12:08.791 Data Area 4 for Telemetry Log: Not Supported 00:12:08.791 Error Log Page Entries Supported: 128 00:12:08.791 Keep Alive: Supported 00:12:08.791 Keep Alive Granularity: 10000 ms 00:12:08.791 00:12:08.791 NVM Command Set Attributes 00:12:08.791 ========================== 
00:12:08.791 Submission Queue Entry Size 00:12:08.791 Max: 64 00:12:08.791 Min: 64 00:12:08.791 Completion Queue Entry Size 00:12:08.791 Max: 16 00:12:08.791 Min: 16 00:12:08.791 Number of Namespaces: 32 00:12:08.791 Compare Command: Supported 00:12:08.791 Write Uncorrectable Command: Not Supported 00:12:08.791 Dataset Management Command: Supported 00:12:08.791 Write Zeroes Command: Supported 00:12:08.791 Set Features Save Field: Not Supported 00:12:08.791 Reservations: Not Supported 00:12:08.791 Timestamp: Not Supported 00:12:08.791 Copy: Supported 00:12:08.791 Volatile Write Cache: Present 00:12:08.791 Atomic Write Unit (Normal): 1 00:12:08.791 Atomic Write Unit (PFail): 1 00:12:08.791 Atomic Compare & Write Unit: 1 00:12:08.791 Fused Compare & Write: Supported 00:12:08.791 Scatter-Gather List 00:12:08.791 SGL Command Set: Supported (Dword aligned) 00:12:08.792 SGL Keyed: Not Supported 00:12:08.792 SGL Bit Bucket Descriptor: Not Supported 00:12:08.792 SGL Metadata Pointer: Not Supported 00:12:08.792 Oversized SGL: Not Supported 00:12:08.792 SGL Metadata Address: Not Supported 00:12:08.792 SGL Offset: Not Supported 00:12:08.792 Transport SGL Data Block: Not Supported 00:12:08.792 Replay Protected Memory Block: Not Supported 00:12:08.792 00:12:08.792 Firmware Slot Information 00:12:08.792 ========================= 00:12:08.792 Active slot: 1 00:12:08.792 Slot 1 Firmware Revision: 24.09 00:12:08.792 00:12:08.792 00:12:08.792 Commands Supported and Effects 00:12:08.792 ============================== 00:12:08.792 Admin Commands 00:12:08.792 -------------- 00:12:08.792 Get Log Page (02h): Supported 00:12:08.792 Identify (06h): Supported 00:12:08.792 Abort (08h): Supported 00:12:08.792 Set Features (09h): Supported 00:12:08.792 Get Features (0Ah): Supported 00:12:08.792 Asynchronous Event Request (0Ch): Supported 00:12:08.792 Keep Alive (18h): Supported 00:12:08.792 I/O Commands 00:12:08.792 ------------ 00:12:08.792 Flush (00h): Supported LBA-Change 00:12:08.792 Write (01h): Supported LBA-Change 00:12:08.792 Read (02h): Supported 00:12:08.792 Compare (05h): Supported 00:12:08.792 Write Zeroes (08h): Supported LBA-Change 00:12:08.792 Dataset Management (09h): Supported LBA-Change 00:12:08.792 Copy (19h): Supported LBA-Change 00:12:08.792 Unknown (79h): Supported LBA-Change 00:12:08.792 Unknown (7Ah): Supported 00:12:08.792 00:12:08.792 Error Log 00:12:08.792 ========= 00:12:08.792 00:12:08.792 Arbitration 00:12:08.792 =========== 00:12:08.792 Arbitration Burst: 1 00:12:08.792 00:12:08.792 Power Management 00:12:08.792 ================ 00:12:08.792 Number of Power States: 1 00:12:08.792 Current Power State: Power State #0 00:12:08.792 Power State #0: 00:12:08.792 Max Power: 0.00 W 00:12:08.792 Non-Operational State: Operational 00:12:08.792 Entry Latency: Not Reported 00:12:08.792 Exit Latency: Not Reported 00:12:08.792 Relative Read Throughput: 0 00:12:08.792 Relative Read Latency: 0 00:12:08.792 Relative Write Throughput: 0 00:12:08.792 Relative Write Latency: 0 00:12:08.792 Idle Power: Not Reported 00:12:08.792 Active Power: Not Reported 00:12:08.792 Non-Operational Permissive Mode: Not Supported 00:12:08.792 00:12:08.792 Health Information 00:12:08.792 ================== 00:12:08.792 Critical Warnings: 00:12:08.792 Available Spare Space: OK 00:12:08.792 Temperature: OK 00:12:08.792 Device Reliability: OK 00:12:08.792 Read Only: No 00:12:08.792 Volatile Memory Backup: OK 00:12:08.792 Current Temperature: 0 Kelvin (-2[2024-06-10 10:37:32.852643] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:08.792 [2024-06-10 10:37:32.852651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:08.792 [2024-06-10 10:37:32.852677] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:08.792 [2024-06-10 10:37:32.852686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.792 [2024-06-10 10:37:32.852693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.792 [2024-06-10 10:37:32.852699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.792 [2024-06-10 10:37:32.852705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.792 [2024-06-10 10:37:32.856253] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:08.792 [2024-06-10 10:37:32.856273] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:08.792 [2024-06-10 10:37:32.856828] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:08.792 [2024-06-10 10:37:32.856866] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:08.792 [2024-06-10 10:37:32.856872] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:08.792 [2024-06-10 10:37:32.857836] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:08.792 [2024-06-10 10:37:32.857846] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:08.792 [2024-06-10 10:37:32.857908] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:08.792 [2024-06-10 10:37:32.859863] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:08.792 73 Celsius) 00:12:08.792 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:08.792 Available Spare: 0% 00:12:08.792 Available Spare Threshold: 0% 00:12:08.792 Life Percentage Used: 0% 00:12:08.792 Data Units Read: 0 00:12:08.792 Data Units Written: 0 00:12:08.792 Host Read Commands: 0 00:12:08.792 Host Write Commands: 0 00:12:08.792 Controller Busy Time: 0 minutes 00:12:08.792 Power Cycles: 0 00:12:08.792 Power On Hours: 0 hours 00:12:08.792 Unsafe Shutdowns: 0 00:12:08.792 Unrecoverable Media Errors: 0 00:12:08.792 Lifetime Error Log Entries: 0 00:12:08.792 Warning Temperature Time: 0 minutes 00:12:08.792 Critical Temperature Time: 0 minutes 00:12:08.792 00:12:08.792 Number of Queues 00:12:08.792 ================ 00:12:08.792 Number of I/O Submission Queues: 127 00:12:08.792 Number of I/O Completion Queues: 127 00:12:08.792 00:12:08.792 Active Namespaces 00:12:08.792 ================= 00:12:08.792 Namespace 
ID:1 00:12:08.792 Error Recovery Timeout: Unlimited 00:12:08.792 Command Set Identifier: NVM (00h) 00:12:08.792 Deallocate: Supported 00:12:08.792 Deallocated/Unwritten Error: Not Supported 00:12:08.792 Deallocated Read Value: Unknown 00:12:08.792 Deallocate in Write Zeroes: Not Supported 00:12:08.792 Deallocated Guard Field: 0xFFFF 00:12:08.792 Flush: Supported 00:12:08.792 Reservation: Supported 00:12:08.792 Namespace Sharing Capabilities: Multiple Controllers 00:12:08.792 Size (in LBAs): 131072 (0GiB) 00:12:08.792 Capacity (in LBAs): 131072 (0GiB) 00:12:08.792 Utilization (in LBAs): 131072 (0GiB) 00:12:08.792 NGUID: 491A9BA3043B478DA5F9DF849B8240C5 00:12:08.792 UUID: 491a9ba3-043b-478d-a5f9-df849b8240c5 00:12:08.792 Thin Provisioning: Not Supported 00:12:08.792 Per-NS Atomic Units: Yes 00:12:08.792 Atomic Boundary Size (Normal): 0 00:12:08.792 Atomic Boundary Size (PFail): 0 00:12:08.792 Atomic Boundary Offset: 0 00:12:08.792 Maximum Single Source Range Length: 65535 00:12:08.792 Maximum Copy Length: 65535 00:12:08.792 Maximum Source Range Count: 1 00:12:08.792 NGUID/EUI64 Never Reused: No 00:12:08.792 Namespace Write Protected: No 00:12:08.792 Number of LBA Formats: 1 00:12:08.792 Current LBA Format: LBA Format #00 00:12:08.792 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:08.792 00:12:08.792 10:37:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:08.792 EAL: No free 2048 kB hugepages reported on node 1 00:12:08.792 [2024-06-10 10:37:33.043897] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:14.078 Initializing NVMe Controllers 00:12:14.078 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:14.078 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:14.078 Initialization complete. Launching workers. 00:12:14.078 ======================================================== 00:12:14.078 Latency(us) 00:12:14.078 Device Information : IOPS MiB/s Average min max 00:12:14.078 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39940.22 156.02 3204.65 835.50 6819.68 00:12:14.078 ======================================================== 00:12:14.078 Total : 39940.22 156.02 3204.65 835.50 6819.68 00:12:14.078 00:12:14.078 [2024-06-10 10:37:38.065610] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:14.078 10:37:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:14.078 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.078 [2024-06-10 10:37:38.246480] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:19.365 Initializing NVMe Controllers 00:12:19.366 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:19.366 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:19.366 Initialization complete. Launching workers. 
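The DEBUG trace and controller dump above record the host-side connect sequence against the vfio-user endpoint (the state machine stepping through "check en", "identify controller", "set number of queues", and so on, followed by the identify output). As a minimal sketch, assuming SPDK's public host API headers (spdk/env.h, spdk/nvme.h) and reusing the traddr/subnqn printed in the log, an application would reach the same "ready" state and read the identify data roughly like this; it is illustrative only and not part of the test scripts.

```c
/*
 * Sketch only: attach to the vfio-user controller exercised above and print
 * a couple of identify fields. Assumes the SPDK host API; the traddr and
 * subnqn values are taken from the log output above.
 */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "vfio_user_attach_sketch"; /* illustrative name */
	if (spdk_env_init(&env_opts) != 0) {
		return 1;
	}

	/* Same target the test points spdk_nvme_perf at. */
	spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_VFIOUSER);
	snprintf(trid.traddr, sizeof(trid.traddr), "%s",
		 "/var/run/vfio-user/domain/vfio-user1/1");
	snprintf(trid.subnqn, sizeof(trid.subnqn), "%s",
		 "nqn.2019-07.io.spdk:cnode1");

	/* Runs the connect/identify state machine seen in the DEBUG trace above. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "spdk_nvme_connect() failed\n");
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Serial: %.20s Model: %.40s\n",
	       (const char *)cdata->sn, (const char *)cdata->mn);

	spdk_nvme_detach(ctrlr);
	return 0;
}
```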
00:12:19.366 ======================================================== 00:12:19.366 Latency(us) 00:12:19.366 Device Information : IOPS MiB/s Average min max 00:12:19.366 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7980.75 6004.76 9958.11 00:12:19.366 ======================================================== 00:12:19.366 Total : 16051.20 62.70 7980.75 6004.76 9958.11 00:12:19.366 00:12:19.366 [2024-06-10 10:37:43.283176] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:19.366 10:37:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:19.366 EAL: No free 2048 kB hugepages reported on node 1 00:12:19.366 [2024-06-10 10:37:43.476099] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:24.653 [2024-06-10 10:37:48.585615] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:24.653 Initializing NVMe Controllers 00:12:24.653 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:24.653 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:24.653 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:24.653 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:24.653 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:24.653 Initialization complete. Launching workers. 00:12:24.653 Starting thread on core 2 00:12:24.653 Starting thread on core 3 00:12:24.653 Starting thread on core 1 00:12:24.654 10:37:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:24.654 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.654 [2024-06-10 10:37:48.848676] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:27.954 [2024-06-10 10:37:51.901454] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:27.954 Initializing NVMe Controllers 00:12:27.954 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:27.954 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:27.954 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:27.954 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:27.954 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:27.954 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:27.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:27.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:27.954 Initialization complete. Launching workers. 
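The two latency tables above (the read and write spdk_nvme_perf runs, both invoked with -q 128) are internally consistent with Little's Law for a closed-loop workload: mean latency is approximately queue depth divided by IOPS. A quick check against the reported numbers:

```latex
W \approx \frac{Q}{\lambda}:\qquad
\frac{128}{39940.22\ \mathrm{IO/s}} \approx 3205\ \mu\mathrm{s}
\ (\text{reported } 3204.65\ \mu\mathrm{s}),\qquad
\frac{128}{16051.20\ \mathrm{IO/s}} \approx 7975\ \mu\mathrm{s}
\ (\text{reported } 7980.75\ \mu\mathrm{s}).
```

The small gap on the write run is plausibly just ramp-up/ramp-down and reporting granularity over the 5-second interval.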
00:12:27.954 Starting thread on core 1 with urgent priority queue 00:12:27.954 Starting thread on core 2 with urgent priority queue 00:12:27.954 Starting thread on core 3 with urgent priority queue 00:12:27.954 Starting thread on core 0 with urgent priority queue 00:12:27.954 SPDK bdev Controller (SPDK1 ) core 0: 10320.67 IO/s 9.69 secs/100000 ios 00:12:27.954 SPDK bdev Controller (SPDK1 ) core 1: 9097.67 IO/s 10.99 secs/100000 ios 00:12:27.954 SPDK bdev Controller (SPDK1 ) core 2: 9351.00 IO/s 10.69 secs/100000 ios 00:12:27.954 SPDK bdev Controller (SPDK1 ) core 3: 11715.67 IO/s 8.54 secs/100000 ios 00:12:27.954 ======================================================== 00:12:27.954 00:12:27.955 10:37:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:27.955 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.955 [2024-06-10 10:37:52.163704] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:27.955 Initializing NVMe Controllers 00:12:27.955 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:27.955 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:27.955 Namespace ID: 1 size: 0GB 00:12:27.955 Initialization complete. 00:12:27.955 INFO: using host memory buffer for IO 00:12:27.955 Hello world! 00:12:27.955 [2024-06-10 10:37:52.195901] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:28.215 10:37:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:28.215 EAL: No free 2048 kB hugepages reported on node 1 00:12:28.215 [2024-06-10 10:37:52.458679] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:29.600 Initializing NVMe Controllers 00:12:29.600 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:29.600 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:29.600 Initialization complete. Launching workers. 
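The hello_world output above ("using host memory buffer for IO ... Hello world!") boils down to allocating an I/O queue pair, writing a DMA-able host buffer to namespace 1, and polling for the completion. The following is a hedged sketch of that flow, assuming a controller already attached over vfio-user (for example via the earlier sketch); the function name, buffer handling, and error paths are illustrative and not the example's actual code.

```c
/* Sketch only: one write to namespace 1 on an already-attached controller. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static void
io_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	*(bool *)arg = true;
	if (spdk_nvme_cpl_is_error(cpl)) {
		fprintf(stderr, "I/O failed\n");
	}
}

static int
write_one_block(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, 1);
	struct spdk_nvme_qpair *qpair;
	void *buf;
	bool done = false;

	if (ns == NULL) {
		return -1;
	}

	/* One I/O queue pair with default options. */
	qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
	if (qpair == NULL) {
		return -1;
	}

	/* DMA-able host memory buffer, one logical block. */
	buf = spdk_zmalloc(spdk_nvme_ns_get_sector_size(ns), 0x1000, NULL,
			   SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
	if (buf == NULL) {
		spdk_nvme_ctrlr_free_io_qpair(qpair);
		return -1;
	}
	strcpy(buf, "Hello world!");

	/* Submit the write to LBA 0 and poll the queue pair until it completes. */
	if (spdk_nvme_ns_cmd_write(ns, qpair, buf, 0, 1, io_done, &done, 0) == 0) {
		while (!done) {
			spdk_nvme_qpair_process_completions(qpair, 0);
		}
	}

	spdk_free(buf);
	spdk_nvme_ctrlr_free_io_qpair(qpair);
	return 0;
}
```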
00:12:29.600 submit (in ns) avg, min, max = 8961.1, 3921.7, 6993781.7 00:12:29.600 complete (in ns) avg, min, max = 17214.1, 2390.8, 6991569.2 00:12:29.600 00:12:29.600 Submit histogram 00:12:29.600 ================ 00:12:29.600 Range in us Cumulative Count 00:12:29.600 3.920 - 3.947: 0.8273% ( 161) 00:12:29.600 3.947 - 3.973: 5.5085% ( 911) 00:12:29.600 3.973 - 4.000: 14.8656% ( 1821) 00:12:29.600 4.000 - 4.027: 27.2288% ( 2406) 00:12:29.600 4.027 - 4.053: 39.1038% ( 2311) 00:12:29.600 4.053 - 4.080: 54.2110% ( 2940) 00:12:29.600 4.080 - 4.107: 71.0292% ( 3273) 00:12:29.600 4.107 - 4.133: 84.5846% ( 2638) 00:12:29.600 4.133 - 4.160: 93.3765% ( 1711) 00:12:29.600 4.160 - 4.187: 97.2972% ( 763) 00:12:29.600 4.187 - 4.213: 98.7257% ( 278) 00:12:29.600 4.213 - 4.240: 99.2755% ( 107) 00:12:29.600 4.240 - 4.267: 99.3628% ( 17) 00:12:29.600 4.267 - 4.293: 99.3937% ( 6) 00:12:29.600 4.293 - 4.320: 99.3988% ( 1) 00:12:29.600 4.507 - 4.533: 99.4091% ( 2) 00:12:29.600 4.587 - 4.613: 99.4194% ( 2) 00:12:29.600 4.827 - 4.853: 99.4296% ( 2) 00:12:29.600 5.387 - 5.413: 99.4348% ( 1) 00:12:29.600 5.600 - 5.627: 99.4399% ( 1) 00:12:29.600 5.627 - 5.653: 99.4502% ( 2) 00:12:29.600 5.733 - 5.760: 99.4605% ( 2) 00:12:29.600 5.787 - 5.813: 99.4707% ( 2) 00:12:29.600 5.840 - 5.867: 99.4759% ( 1) 00:12:29.600 5.867 - 5.893: 99.4810% ( 1) 00:12:29.600 5.920 - 5.947: 99.4862% ( 1) 00:12:29.600 5.973 - 6.000: 99.4913% ( 1) 00:12:29.600 6.027 - 6.053: 99.4964% ( 1) 00:12:29.600 6.107 - 6.133: 99.5067% ( 2) 00:12:29.600 6.187 - 6.213: 99.5118% ( 1) 00:12:29.600 6.213 - 6.240: 99.5170% ( 1) 00:12:29.600 6.400 - 6.427: 99.5221% ( 1) 00:12:29.600 6.453 - 6.480: 99.5273% ( 1) 00:12:29.600 6.480 - 6.507: 99.5324% ( 1) 00:12:29.600 6.507 - 6.533: 99.5375% ( 1) 00:12:29.600 6.560 - 6.587: 99.5427% ( 1) 00:12:29.600 6.613 - 6.640: 99.5478% ( 1) 00:12:29.600 6.640 - 6.667: 99.5530% ( 1) 00:12:29.600 6.667 - 6.693: 99.5581% ( 1) 00:12:29.600 6.693 - 6.720: 99.5735% ( 3) 00:12:29.600 6.747 - 6.773: 99.5786% ( 1) 00:12:29.600 6.773 - 6.800: 99.5889% ( 2) 00:12:29.600 6.800 - 6.827: 99.5941% ( 1) 00:12:29.600 6.827 - 6.880: 99.5992% ( 1) 00:12:29.600 6.880 - 6.933: 99.6198% ( 4) 00:12:29.600 6.933 - 6.987: 99.6403% ( 4) 00:12:29.600 6.987 - 7.040: 99.6506% ( 2) 00:12:29.600 7.040 - 7.093: 99.6711% ( 4) 00:12:29.600 7.147 - 7.200: 99.6814% ( 2) 00:12:29.600 7.200 - 7.253: 99.6866% ( 1) 00:12:29.600 7.253 - 7.307: 99.6968% ( 2) 00:12:29.600 7.307 - 7.360: 99.7122% ( 3) 00:12:29.600 7.360 - 7.413: 99.7225% ( 2) 00:12:29.600 7.467 - 7.520: 99.7431% ( 4) 00:12:29.600 7.520 - 7.573: 99.7739% ( 6) 00:12:29.600 7.573 - 7.627: 99.7996% ( 5) 00:12:29.600 7.627 - 7.680: 99.8047% ( 1) 00:12:29.600 7.787 - 7.840: 99.8150% ( 2) 00:12:29.600 7.840 - 7.893: 99.8304% ( 3) 00:12:29.601 7.947 - 8.000: 99.8458% ( 3) 00:12:29.601 8.053 - 8.107: 99.8510% ( 1) 00:12:29.601 8.160 - 8.213: 99.8613% ( 2) 00:12:29.601 8.373 - 8.427: 99.8664% ( 1) 00:12:29.601 9.653 - 9.707: 99.8715% ( 1) 00:12:29.601 14.400 - 14.507: 99.8767% ( 1) 00:12:29.601 15.467 - 15.573: 99.8818% ( 1) 00:12:29.601 1017.173 - 1024.000: 99.8870% ( 1) 00:12:29.601 1037.653 - 1044.480: 99.8921% ( 1) 00:12:29.601 3986.773 - 4014.080: 99.9846% ( 18) 00:12:29.601 6990.507 - 7045.120: 100.0000% ( 3) 00:12:29.601 00:12:29.601 Complete histogram 00:12:29.601 ================== 00:12:29.601 Range in us Cumulative Count 00:12:29.601 2.387 - 2.400: 0.6886% ( 134) 00:12:29.601 2.400 - 2.413: 1.0071% ( 62) 00:12:29.601 2.413 - 2.427: 1.1202% ( 22) 00:12:29.601 2.427 - 2.440: 34.8235% ( 6559) 
00:12:29.601 2.440 - 2.453: 59.8685% ( 4874) 00:12:29.601 2.453 - 2.467: 68.0386% ( 1590) 00:12:29.601 2.467 - 2.480: 75.1349% ( 1381) 00:12:29.601 2.480 - [2024-06-10 10:37:53.479277] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:29.601 2.493: 80.2271% ( 991) 00:12:29.601 2.493 - 2.507: 82.6319% ( 468) 00:12:29.601 2.507 - 2.520: 89.0550% ( 1250) 00:12:29.601 2.520 - 2.533: 94.5635% ( 1072) 00:12:29.601 2.533 - 2.547: 97.0197% ( 478) 00:12:29.601 2.547 - 2.560: 98.3351% ( 256) 00:12:29.601 2.560 - 2.573: 99.0751% ( 144) 00:12:29.601 2.573 - 2.587: 99.3526% ( 54) 00:12:29.601 2.587 - 2.600: 99.3937% ( 8) 00:12:29.601 2.600 - 2.613: 99.4039% ( 2) 00:12:29.601 2.613 - 2.627: 99.4091% ( 1) 00:12:29.601 4.533 - 4.560: 99.4142% ( 1) 00:12:29.601 4.560 - 4.587: 99.4194% ( 1) 00:12:29.601 4.587 - 4.613: 99.4245% ( 1) 00:12:29.601 4.693 - 4.720: 99.4296% ( 1) 00:12:29.601 4.720 - 4.747: 99.4348% ( 1) 00:12:29.601 4.773 - 4.800: 99.4450% ( 2) 00:12:29.601 4.827 - 4.853: 99.4502% ( 1) 00:12:29.601 5.040 - 5.067: 99.4553% ( 1) 00:12:29.601 5.067 - 5.093: 99.4656% ( 2) 00:12:29.601 5.093 - 5.120: 99.4759% ( 2) 00:12:29.601 5.147 - 5.173: 99.4862% ( 2) 00:12:29.601 5.173 - 5.200: 99.4913% ( 1) 00:12:29.601 5.227 - 5.253: 99.4964% ( 1) 00:12:29.601 5.280 - 5.307: 99.5067% ( 2) 00:12:29.601 5.333 - 5.360: 99.5118% ( 1) 00:12:29.601 5.360 - 5.387: 99.5170% ( 1) 00:12:29.601 5.413 - 5.440: 99.5273% ( 2) 00:12:29.601 5.440 - 5.467: 99.5324% ( 1) 00:12:29.601 5.520 - 5.547: 99.5375% ( 1) 00:12:29.601 5.547 - 5.573: 99.5427% ( 1) 00:12:29.601 5.573 - 5.600: 99.5530% ( 2) 00:12:29.601 5.600 - 5.627: 99.5684% ( 3) 00:12:29.601 5.680 - 5.707: 99.5786% ( 2) 00:12:29.601 5.893 - 5.920: 99.5838% ( 1) 00:12:29.601 6.027 - 6.053: 99.5889% ( 1) 00:12:29.601 6.053 - 6.080: 99.5941% ( 1) 00:12:29.601 6.133 - 6.160: 99.5992% ( 1) 00:12:29.601 6.160 - 6.187: 99.6043% ( 1) 00:12:29.601 6.213 - 6.240: 99.6095% ( 1) 00:12:29.601 6.373 - 6.400: 99.6146% ( 1) 00:12:29.601 13.547 - 13.600: 99.6198% ( 1) 00:12:29.601 14.293 - 14.400: 99.6249% ( 1) 00:12:29.601 44.160 - 44.373: 99.6300% ( 1) 00:12:29.601 174.080 - 174.933: 99.6352% ( 1) 00:12:29.601 1017.173 - 1024.000: 99.6403% ( 1) 00:12:29.601 1037.653 - 1044.480: 99.6454% ( 1) 00:12:29.601 1058.133 - 1064.960: 99.6506% ( 1) 00:12:29.601 3986.773 - 4014.080: 99.9794% ( 64) 00:12:29.601 6963.200 - 6990.507: 99.9897% ( 2) 00:12:29.601 6990.507 - 7045.120: 100.0000% ( 2) 00:12:29.601 00:12:29.601 10:37:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:29.601 10:37:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:29.601 10:37:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:29.601 10:37:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:29.601 10:37:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:29.601 [ 00:12:29.601 { 00:12:29.601 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:29.601 "subtype": "Discovery", 00:12:29.601 "listen_addresses": [], 00:12:29.601 "allow_any_host": true, 00:12:29.601 "hosts": [] 00:12:29.601 }, 00:12:29.601 { 00:12:29.601 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:29.601 "subtype": "NVMe", 00:12:29.601 
"listen_addresses": [ 00:12:29.601 { 00:12:29.601 "trtype": "VFIOUSER", 00:12:29.601 "adrfam": "IPv4", 00:12:29.601 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:29.601 "trsvcid": "0" 00:12:29.601 } 00:12:29.601 ], 00:12:29.601 "allow_any_host": true, 00:12:29.601 "hosts": [], 00:12:29.601 "serial_number": "SPDK1", 00:12:29.601 "model_number": "SPDK bdev Controller", 00:12:29.601 "max_namespaces": 32, 00:12:29.601 "min_cntlid": 1, 00:12:29.601 "max_cntlid": 65519, 00:12:29.601 "namespaces": [ 00:12:29.601 { 00:12:29.601 "nsid": 1, 00:12:29.601 "bdev_name": "Malloc1", 00:12:29.601 "name": "Malloc1", 00:12:29.601 "nguid": "491A9BA3043B478DA5F9DF849B8240C5", 00:12:29.601 "uuid": "491a9ba3-043b-478d-a5f9-df849b8240c5" 00:12:29.601 } 00:12:29.601 ] 00:12:29.601 }, 00:12:29.601 { 00:12:29.601 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:29.601 "subtype": "NVMe", 00:12:29.601 "listen_addresses": [ 00:12:29.601 { 00:12:29.601 "trtype": "VFIOUSER", 00:12:29.601 "adrfam": "IPv4", 00:12:29.601 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:29.601 "trsvcid": "0" 00:12:29.601 } 00:12:29.601 ], 00:12:29.601 "allow_any_host": true, 00:12:29.601 "hosts": [], 00:12:29.601 "serial_number": "SPDK2", 00:12:29.601 "model_number": "SPDK bdev Controller", 00:12:29.601 "max_namespaces": 32, 00:12:29.601 "min_cntlid": 1, 00:12:29.601 "max_cntlid": 65519, 00:12:29.601 "namespaces": [ 00:12:29.601 { 00:12:29.601 "nsid": 1, 00:12:29.601 "bdev_name": "Malloc2", 00:12:29.601 "name": "Malloc2", 00:12:29.601 "nguid": "C8717AC6B4974EA6AAE03108E110B11B", 00:12:29.601 "uuid": "c8717ac6-b497-4ea6-aae0-3108e110b11b" 00:12:29.601 } 00:12:29.601 ] 00:12:29.601 } 00:12:29.601 ] 00:12:29.601 10:37:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:29.601 10:37:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=742485 00:12:29.601 10:37:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:29.601 10:37:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:29.601 10:37:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # local i=0 00:12:29.601 10:37:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:29.601 10:37:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:29.601 10:37:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1275 -- # return 0 00:12:29.601 10:37:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:29.601 10:37:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:29.601 EAL: No free 2048 kB hugepages reported on node 1 00:12:29.601 Malloc3 00:12:29.601 [2024-06-10 10:37:53.870688] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:29.601 10:37:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:29.862 [2024-06-10 10:37:54.024739] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:29.862 10:37:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:29.862 Asynchronous Event Request test 00:12:29.862 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:29.862 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:29.862 Registering asynchronous event callbacks... 00:12:29.862 Starting namespace attribute notice tests for all controllers... 00:12:29.862 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:29.862 aer_cb - Changed Namespace 00:12:29.862 Cleaning up... 00:12:30.124 [ 00:12:30.124 { 00:12:30.124 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:30.124 "subtype": "Discovery", 00:12:30.124 "listen_addresses": [], 00:12:30.124 "allow_any_host": true, 00:12:30.124 "hosts": [] 00:12:30.124 }, 00:12:30.124 { 00:12:30.124 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:30.124 "subtype": "NVMe", 00:12:30.124 "listen_addresses": [ 00:12:30.124 { 00:12:30.124 "trtype": "VFIOUSER", 00:12:30.124 "adrfam": "IPv4", 00:12:30.124 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:30.124 "trsvcid": "0" 00:12:30.124 } 00:12:30.124 ], 00:12:30.124 "allow_any_host": true, 00:12:30.124 "hosts": [], 00:12:30.124 "serial_number": "SPDK1", 00:12:30.124 "model_number": "SPDK bdev Controller", 00:12:30.124 "max_namespaces": 32, 00:12:30.124 "min_cntlid": 1, 00:12:30.124 "max_cntlid": 65519, 00:12:30.124 "namespaces": [ 00:12:30.124 { 00:12:30.124 "nsid": 1, 00:12:30.124 "bdev_name": "Malloc1", 00:12:30.124 "name": "Malloc1", 00:12:30.124 "nguid": "491A9BA3043B478DA5F9DF849B8240C5", 00:12:30.124 "uuid": "491a9ba3-043b-478d-a5f9-df849b8240c5" 00:12:30.124 }, 00:12:30.124 { 00:12:30.124 "nsid": 2, 00:12:30.124 "bdev_name": "Malloc3", 00:12:30.124 "name": "Malloc3", 00:12:30.124 "nguid": "BEBCA611AB8E45169F8638B002DB58D5", 00:12:30.124 "uuid": "bebca611-ab8e-4516-9f86-38b002db58d5" 00:12:30.124 } 00:12:30.124 ] 00:12:30.124 }, 00:12:30.124 { 00:12:30.124 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:30.124 "subtype": "NVMe", 00:12:30.124 "listen_addresses": [ 00:12:30.124 { 00:12:30.124 "trtype": "VFIOUSER", 00:12:30.124 "adrfam": "IPv4", 00:12:30.124 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:30.124 "trsvcid": "0" 00:12:30.124 } 00:12:30.124 ], 00:12:30.124 "allow_any_host": true, 00:12:30.124 "hosts": [], 00:12:30.124 "serial_number": "SPDK2", 00:12:30.124 "model_number": "SPDK bdev Controller", 00:12:30.125 
"max_namespaces": 32, 00:12:30.125 "min_cntlid": 1, 00:12:30.125 "max_cntlid": 65519, 00:12:30.125 "namespaces": [ 00:12:30.125 { 00:12:30.125 "nsid": 1, 00:12:30.125 "bdev_name": "Malloc2", 00:12:30.125 "name": "Malloc2", 00:12:30.125 "nguid": "C8717AC6B4974EA6AAE03108E110B11B", 00:12:30.125 "uuid": "c8717ac6-b497-4ea6-aae0-3108e110b11b" 00:12:30.125 } 00:12:30.125 ] 00:12:30.125 } 00:12:30.125 ] 00:12:30.125 10:37:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 742485 00:12:30.125 10:37:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:30.125 10:37:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:30.125 10:37:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:30.125 10:37:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:30.125 [2024-06-10 10:37:54.232745] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:12:30.125 [2024-06-10 10:37:54.232786] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid742497 ] 00:12:30.125 EAL: No free 2048 kB hugepages reported on node 1 00:12:30.125 [2024-06-10 10:37:54.263789] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:30.125 [2024-06-10 10:37:54.272468] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:30.125 [2024-06-10 10:37:54.272491] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f84b7fca000 00:12:30.125 [2024-06-10 10:37:54.273467] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:30.125 [2024-06-10 10:37:54.274474] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:30.125 [2024-06-10 10:37:54.275478] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:30.125 [2024-06-10 10:37:54.276485] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:30.125 [2024-06-10 10:37:54.277493] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:30.125 [2024-06-10 10:37:54.278500] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:30.125 [2024-06-10 10:37:54.279510] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:30.125 [2024-06-10 10:37:54.280510] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:30.125 [2024-06-10 10:37:54.281519] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:30.125 [2024-06-10 10:37:54.281533] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f84b7fbf000 00:12:30.125 [2024-06-10 10:37:54.282858] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:30.125 [2024-06-10 10:37:54.303399] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:30.125 [2024-06-10 10:37:54.303422] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:30.125 [2024-06-10 10:37:54.305469] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:30.125 [2024-06-10 10:37:54.305514] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:30.125 [2024-06-10 10:37:54.305596] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:12:30.125 [2024-06-10 10:37:54.305612] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:30.125 [2024-06-10 10:37:54.305617] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:30.125 [2024-06-10 10:37:54.306472] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:30.125 [2024-06-10 10:37:54.306482] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:30.125 [2024-06-10 10:37:54.306489] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:30.125 [2024-06-10 10:37:54.307476] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:30.125 [2024-06-10 10:37:54.307484] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:30.125 [2024-06-10 10:37:54.307491] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:30.125 [2024-06-10 10:37:54.308488] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:30.125 [2024-06-10 10:37:54.308497] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:30.125 [2024-06-10 10:37:54.309490] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:30.125 [2024-06-10 10:37:54.309499] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:30.125 [2024-06-10 10:37:54.309504] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:30.125 [2024-06-10 10:37:54.309510] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:30.125 [2024-06-10 10:37:54.309616] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:30.125 [2024-06-10 10:37:54.309621] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:30.125 [2024-06-10 10:37:54.309626] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:30.125 [2024-06-10 10:37:54.310497] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:30.125 [2024-06-10 10:37:54.311503] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:30.125 [2024-06-10 10:37:54.312513] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:30.125 [2024-06-10 10:37:54.313513] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:30.125 [2024-06-10 10:37:54.313553] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:30.125 [2024-06-10 10:37:54.314525] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:30.125 [2024-06-10 10:37:54.314534] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:30.125 [2024-06-10 10:37:54.314539] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:30.125 [2024-06-10 10:37:54.314560] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:30.125 [2024-06-10 10:37:54.314567] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:30.125 [2024-06-10 10:37:54.314581] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:30.125 [2024-06-10 10:37:54.314586] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:30.125 [2024-06-10 10:37:54.314599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:30.125 [2024-06-10 10:37:54.321251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:30.125 [2024-06-10 10:37:54.321263] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:30.125 [2024-06-10 10:37:54.321268] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:30.125 [2024-06-10 10:37:54.321272] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:30.125 [2024-06-10 10:37:54.321279] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:30.125 [2024-06-10 10:37:54.321285] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:30.125 [2024-06-10 10:37:54.321289] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:30.125 [2024-06-10 10:37:54.321294] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:30.125 [2024-06-10 10:37:54.321302] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:30.125 [2024-06-10 10:37:54.321312] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:30.125 [2024-06-10 10:37:54.329248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:30.125 [2024-06-10 10:37:54.329260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:30.125 [2024-06-10 10:37:54.329269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:30.125 [2024-06-10 10:37:54.329277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:30.125 [2024-06-10 10:37:54.329285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:30.126 [2024-06-10 10:37:54.329292] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:30.126 [2024-06-10 10:37:54.329301] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:30.126 [2024-06-10 10:37:54.329310] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:30.126 [2024-06-10 10:37:54.337247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:30.126 [2024-06-10 10:37:54.337254] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:30.126 [2024-06-10 10:37:54.337259] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:30.126 [2024-06-10 10:37:54.337266] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:30.126 [2024-06-10 10:37:54.337272] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:30.126 [2024-06-10 10:37:54.337280] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:30.126 [2024-06-10 10:37:54.345249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:30.126 [2024-06-10 10:37:54.345302] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:30.126 [2024-06-10 10:37:54.345310] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:30.126 [2024-06-10 10:37:54.345318] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:30.126 [2024-06-10 10:37:54.345322] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:30.126 [2024-06-10 10:37:54.345328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:30.126 [2024-06-10 10:37:54.353247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:30.126 [2024-06-10 10:37:54.353258] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:30.126 [2024-06-10 10:37:54.353270] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:30.126 [2024-06-10 10:37:54.353278] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:30.126 [2024-06-10 10:37:54.353285] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:30.126 [2024-06-10 10:37:54.353289] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:30.126 [2024-06-10 10:37:54.353295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:30.126 [2024-06-10 10:37:54.361249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:30.126 [2024-06-10 10:37:54.361263] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:30.126 [2024-06-10 10:37:54.361271] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:30.126 [2024-06-10 10:37:54.361282] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:30.126 [2024-06-10 10:37:54.361286] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:30.126 [2024-06-10 10:37:54.361292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:30.126 [2024-06-10 10:37:54.369247] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:30.126 [2024-06-10 10:37:54.369256] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:30.126 [2024-06-10 10:37:54.369263] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:30.126 [2024-06-10 10:37:54.369271] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:30.126 [2024-06-10 10:37:54.369277] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:30.126 [2024-06-10 10:37:54.369282] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:30.126 [2024-06-10 10:37:54.369287] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:30.126 [2024-06-10 10:37:54.369291] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:30.126 [2024-06-10 10:37:54.369296] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:30.126 [2024-06-10 10:37:54.369314] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:30.126 [2024-06-10 10:37:54.377247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:30.126 [2024-06-10 10:37:54.377260] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:30.126 [2024-06-10 10:37:54.385248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:30.126 [2024-06-10 10:37:54.385271] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:30.126 [2024-06-10 10:37:54.393250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:30.126 [2024-06-10 10:37:54.393263] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:30.126 [2024-06-10 10:37:54.401247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:30.126 [2024-06-10 10:37:54.401260] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:30.126 [2024-06-10 10:37:54.401265] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:30.126 [2024-06-10 10:37:54.401268] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:30.126 [2024-06-10 10:37:54.401272] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:30.126 [2024-06-10 10:37:54.401279] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:30.126 [2024-06-10 10:37:54.401286] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:30.126 [2024-06-10 10:37:54.401291] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:30.126 [2024-06-10 10:37:54.401299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:30.126 [2024-06-10 10:37:54.401306] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:30.126 [2024-06-10 10:37:54.401311] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:30.126 [2024-06-10 10:37:54.401316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:30.126 [2024-06-10 10:37:54.401324] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:30.126 [2024-06-10 10:37:54.401328] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:30.126 [2024-06-10 10:37:54.401334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:30.126 [2024-06-10 10:37:54.409250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:30.126 [2024-06-10 10:37:54.409264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:30.126 [2024-06-10 10:37:54.409273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:30.126 [2024-06-10 10:37:54.409282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:30.126 ===================================================== 00:12:30.126 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:30.126 ===================================================== 00:12:30.126 Controller Capabilities/Features 00:12:30.126 ================================ 00:12:30.126 Vendor ID: 4e58 00:12:30.126 Subsystem Vendor ID: 4e58 00:12:30.126 Serial Number: SPDK2 00:12:30.126 Model Number: SPDK bdev Controller 00:12:30.126 Firmware Version: 24.09 00:12:30.126 Recommended Arb Burst: 6 00:12:30.126 IEEE OUI Identifier: 8d 6b 50 00:12:30.126 Multi-path I/O 00:12:30.126 May have multiple subsystem ports: Yes 00:12:30.126 May have multiple controllers: Yes 00:12:30.126 Associated with SR-IOV VF: No 00:12:30.126 Max Data Transfer Size: 131072 00:12:30.126 Max Number of Namespaces: 32 00:12:30.126 Max Number of I/O Queues: 127 00:12:30.126 NVMe Specification Version (VS): 1.3 00:12:30.126 NVMe Specification Version (Identify): 1.3 00:12:30.126 Maximum Queue Entries: 256 00:12:30.126 Contiguous Queues Required: Yes 00:12:30.126 Arbitration Mechanisms Supported 00:12:30.126 Weighted Round Robin: Not Supported 00:12:30.126 Vendor Specific: Not Supported 00:12:30.126 Reset Timeout: 15000 ms 00:12:30.126 Doorbell Stride: 4 bytes 
00:12:30.126 NVM Subsystem Reset: Not Supported 00:12:30.126 Command Sets Supported 00:12:30.126 NVM Command Set: Supported 00:12:30.126 Boot Partition: Not Supported 00:12:30.126 Memory Page Size Minimum: 4096 bytes 00:12:30.126 Memory Page Size Maximum: 4096 bytes 00:12:30.126 Persistent Memory Region: Not Supported 00:12:30.126 Optional Asynchronous Events Supported 00:12:30.126 Namespace Attribute Notices: Supported 00:12:30.126 Firmware Activation Notices: Not Supported 00:12:30.127 ANA Change Notices: Not Supported 00:12:30.127 PLE Aggregate Log Change Notices: Not Supported 00:12:30.127 LBA Status Info Alert Notices: Not Supported 00:12:30.127 EGE Aggregate Log Change Notices: Not Supported 00:12:30.127 Normal NVM Subsystem Shutdown event: Not Supported 00:12:30.127 Zone Descriptor Change Notices: Not Supported 00:12:30.127 Discovery Log Change Notices: Not Supported 00:12:30.127 Controller Attributes 00:12:30.127 128-bit Host Identifier: Supported 00:12:30.127 Non-Operational Permissive Mode: Not Supported 00:12:30.127 NVM Sets: Not Supported 00:12:30.127 Read Recovery Levels: Not Supported 00:12:30.127 Endurance Groups: Not Supported 00:12:30.127 Predictable Latency Mode: Not Supported 00:12:30.127 Traffic Based Keep ALive: Not Supported 00:12:30.127 Namespace Granularity: Not Supported 00:12:30.127 SQ Associations: Not Supported 00:12:30.127 UUID List: Not Supported 00:12:30.127 Multi-Domain Subsystem: Not Supported 00:12:30.127 Fixed Capacity Management: Not Supported 00:12:30.127 Variable Capacity Management: Not Supported 00:12:30.127 Delete Endurance Group: Not Supported 00:12:30.127 Delete NVM Set: Not Supported 00:12:30.127 Extended LBA Formats Supported: Not Supported 00:12:30.127 Flexible Data Placement Supported: Not Supported 00:12:30.127 00:12:30.127 Controller Memory Buffer Support 00:12:30.127 ================================ 00:12:30.127 Supported: No 00:12:30.127 00:12:30.127 Persistent Memory Region Support 00:12:30.127 ================================ 00:12:30.127 Supported: No 00:12:30.127 00:12:30.127 Admin Command Set Attributes 00:12:30.127 ============================ 00:12:30.127 Security Send/Receive: Not Supported 00:12:30.127 Format NVM: Not Supported 00:12:30.127 Firmware Activate/Download: Not Supported 00:12:30.127 Namespace Management: Not Supported 00:12:30.127 Device Self-Test: Not Supported 00:12:30.127 Directives: Not Supported 00:12:30.127 NVMe-MI: Not Supported 00:12:30.127 Virtualization Management: Not Supported 00:12:30.127 Doorbell Buffer Config: Not Supported 00:12:30.127 Get LBA Status Capability: Not Supported 00:12:30.127 Command & Feature Lockdown Capability: Not Supported 00:12:30.127 Abort Command Limit: 4 00:12:30.127 Async Event Request Limit: 4 00:12:30.127 Number of Firmware Slots: N/A 00:12:30.127 Firmware Slot 1 Read-Only: N/A 00:12:30.127 Firmware Activation Without Reset: N/A 00:12:30.127 Multiple Update Detection Support: N/A 00:12:30.127 Firmware Update Granularity: No Information Provided 00:12:30.127 Per-Namespace SMART Log: No 00:12:30.127 Asymmetric Namespace Access Log Page: Not Supported 00:12:30.127 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:30.127 Command Effects Log Page: Supported 00:12:30.127 Get Log Page Extended Data: Supported 00:12:30.127 Telemetry Log Pages: Not Supported 00:12:30.127 Persistent Event Log Pages: Not Supported 00:12:30.127 Supported Log Pages Log Page: May Support 00:12:30.127 Commands Supported & Effects Log Page: Not Supported 00:12:30.127 Feature Identifiers & Effects Log Page:May 
Support 00:12:30.127 NVMe-MI Commands & Effects Log Page: May Support 00:12:30.127 Data Area 4 for Telemetry Log: Not Supported 00:12:30.127 Error Log Page Entries Supported: 128 00:12:30.127 Keep Alive: Supported 00:12:30.127 Keep Alive Granularity: 10000 ms 00:12:30.127 00:12:30.127 NVM Command Set Attributes 00:12:30.127 ========================== 00:12:30.127 Submission Queue Entry Size 00:12:30.127 Max: 64 00:12:30.127 Min: 64 00:12:30.127 Completion Queue Entry Size 00:12:30.127 Max: 16 00:12:30.127 Min: 16 00:12:30.127 Number of Namespaces: 32 00:12:30.127 Compare Command: Supported 00:12:30.127 Write Uncorrectable Command: Not Supported 00:12:30.127 Dataset Management Command: Supported 00:12:30.127 Write Zeroes Command: Supported 00:12:30.127 Set Features Save Field: Not Supported 00:12:30.127 Reservations: Not Supported 00:12:30.127 Timestamp: Not Supported 00:12:30.127 Copy: Supported 00:12:30.127 Volatile Write Cache: Present 00:12:30.127 Atomic Write Unit (Normal): 1 00:12:30.127 Atomic Write Unit (PFail): 1 00:12:30.127 Atomic Compare & Write Unit: 1 00:12:30.127 Fused Compare & Write: Supported 00:12:30.127 Scatter-Gather List 00:12:30.127 SGL Command Set: Supported (Dword aligned) 00:12:30.127 SGL Keyed: Not Supported 00:12:30.127 SGL Bit Bucket Descriptor: Not Supported 00:12:30.127 SGL Metadata Pointer: Not Supported 00:12:30.127 Oversized SGL: Not Supported 00:12:30.127 SGL Metadata Address: Not Supported 00:12:30.127 SGL Offset: Not Supported 00:12:30.127 Transport SGL Data Block: Not Supported 00:12:30.127 Replay Protected Memory Block: Not Supported 00:12:30.127 00:12:30.127 Firmware Slot Information 00:12:30.127 ========================= 00:12:30.127 Active slot: 1 00:12:30.127 Slot 1 Firmware Revision: 24.09 00:12:30.127 00:12:30.127 00:12:30.127 Commands Supported and Effects 00:12:30.127 ============================== 00:12:30.127 Admin Commands 00:12:30.127 -------------- 00:12:30.127 Get Log Page (02h): Supported 00:12:30.127 Identify (06h): Supported 00:12:30.127 Abort (08h): Supported 00:12:30.127 Set Features (09h): Supported 00:12:30.127 Get Features (0Ah): Supported 00:12:30.127 Asynchronous Event Request (0Ch): Supported 00:12:30.127 Keep Alive (18h): Supported 00:12:30.127 I/O Commands 00:12:30.127 ------------ 00:12:30.127 Flush (00h): Supported LBA-Change 00:12:30.127 Write (01h): Supported LBA-Change 00:12:30.127 Read (02h): Supported 00:12:30.127 Compare (05h): Supported 00:12:30.127 Write Zeroes (08h): Supported LBA-Change 00:12:30.127 Dataset Management (09h): Supported LBA-Change 00:12:30.127 Copy (19h): Supported LBA-Change 00:12:30.127 Unknown (79h): Supported LBA-Change 00:12:30.127 Unknown (7Ah): Supported 00:12:30.127 00:12:30.127 Error Log 00:12:30.127 ========= 00:12:30.127 00:12:30.127 Arbitration 00:12:30.127 =========== 00:12:30.127 Arbitration Burst: 1 00:12:30.127 00:12:30.127 Power Management 00:12:30.127 ================ 00:12:30.127 Number of Power States: 1 00:12:30.127 Current Power State: Power State #0 00:12:30.127 Power State #0: 00:12:30.127 Max Power: 0.00 W 00:12:30.127 Non-Operational State: Operational 00:12:30.127 Entry Latency: Not Reported 00:12:30.127 Exit Latency: Not Reported 00:12:30.127 Relative Read Throughput: 0 00:12:30.127 Relative Read Latency: 0 00:12:30.127 Relative Write Throughput: 0 00:12:30.127 Relative Write Latency: 0 00:12:30.127 Idle Power: Not Reported 00:12:30.127 Active Power: Not Reported 00:12:30.127 Non-Operational Permissive Mode: Not Supported 00:12:30.127 00:12:30.127 Health Information 
00:12:30.127 ================== 00:12:30.127 Critical Warnings: 00:12:30.127 Available Spare Space: OK 00:12:30.127 Temperature: OK 00:12:30.127 Device Reliability: OK 00:12:30.127 Read Only: No 00:12:30.127 Volatile Memory Backup: OK 00:12:30.127 Current Temperature: 0 Kelvin (-2[2024-06-10 10:37:54.409383] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:30.388 [2024-06-10 10:37:54.417248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:30.388 [2024-06-10 10:37:54.417276] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:30.388 [2024-06-10 10:37:54.417285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:30.388 [2024-06-10 10:37:54.417291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:30.388 [2024-06-10 10:37:54.417298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:30.388 [2024-06-10 10:37:54.417304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:30.388 [2024-06-10 10:37:54.421248] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:30.388 [2024-06-10 10:37:54.421260] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:30.388 [2024-06-10 10:37:54.421369] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:30.389 [2024-06-10 10:37:54.421416] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:30.389 [2024-06-10 10:37:54.421423] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:30.389 [2024-06-10 10:37:54.422370] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:30.389 [2024-06-10 10:37:54.422382] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:30.389 [2024-06-10 10:37:54.422432] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:30.389 [2024-06-10 10:37:54.424128] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:30.389 73 Celsius) 00:12:30.389 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:30.389 Available Spare: 0% 00:12:30.389 Available Spare Threshold: 0% 00:12:30.389 Life Percentage Used: 0% 00:12:30.389 Data Units Read: 0 00:12:30.389 Data Units Written: 0 00:12:30.389 Host Read Commands: 0 00:12:30.389 Host Write Commands: 0 00:12:30.389 Controller Busy Time: 0 minutes 00:12:30.389 Power Cycles: 0 00:12:30.389 Power On Hours: 0 hours 00:12:30.389 Unsafe Shutdowns: 0 00:12:30.389 Unrecoverable Media Errors: 0 00:12:30.389 Lifetime Error Log Entries: 0 00:12:30.389 Warning Temperature Time: 0 
minutes 00:12:30.389 Critical Temperature Time: 0 minutes 00:12:30.389 00:12:30.389 Number of Queues 00:12:30.389 ================ 00:12:30.389 Number of I/O Submission Queues: 127 00:12:30.389 Number of I/O Completion Queues: 127 00:12:30.389 00:12:30.389 Active Namespaces 00:12:30.389 ================= 00:12:30.389 Namespace ID:1 00:12:30.389 Error Recovery Timeout: Unlimited 00:12:30.389 Command Set Identifier: NVM (00h) 00:12:30.389 Deallocate: Supported 00:12:30.389 Deallocated/Unwritten Error: Not Supported 00:12:30.389 Deallocated Read Value: Unknown 00:12:30.389 Deallocate in Write Zeroes: Not Supported 00:12:30.389 Deallocated Guard Field: 0xFFFF 00:12:30.389 Flush: Supported 00:12:30.389 Reservation: Supported 00:12:30.389 Namespace Sharing Capabilities: Multiple Controllers 00:12:30.389 Size (in LBAs): 131072 (0GiB) 00:12:30.389 Capacity (in LBAs): 131072 (0GiB) 00:12:30.389 Utilization (in LBAs): 131072 (0GiB) 00:12:30.389 NGUID: C8717AC6B4974EA6AAE03108E110B11B 00:12:30.389 UUID: c8717ac6-b497-4ea6-aae0-3108e110b11b 00:12:30.389 Thin Provisioning: Not Supported 00:12:30.389 Per-NS Atomic Units: Yes 00:12:30.389 Atomic Boundary Size (Normal): 0 00:12:30.389 Atomic Boundary Size (PFail): 0 00:12:30.389 Atomic Boundary Offset: 0 00:12:30.389 Maximum Single Source Range Length: 65535 00:12:30.389 Maximum Copy Length: 65535 00:12:30.389 Maximum Source Range Count: 1 00:12:30.389 NGUID/EUI64 Never Reused: No 00:12:30.389 Namespace Write Protected: No 00:12:30.389 Number of LBA Formats: 1 00:12:30.389 Current LBA Format: LBA Format #00 00:12:30.389 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:30.389 00:12:30.389 10:37:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:30.389 EAL: No free 2048 kB hugepages reported on node 1 00:12:30.389 [2024-06-10 10:37:54.614623] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:35.676 Initializing NVMe Controllers 00:12:35.676 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:35.676 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:35.676 Initialization complete. Launching workers. 
00:12:35.676 ======================================================== 00:12:35.676 Latency(us) 00:12:35.676 Device Information : IOPS MiB/s Average min max 00:12:35.676 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39967.00 156.12 3205.04 832.00 7798.67 00:12:35.676 ======================================================== 00:12:35.676 Total : 39967.00 156.12 3205.04 832.00 7798.67 00:12:35.676 00:12:35.676 [2024-06-10 10:37:59.723427] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:35.676 10:37:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:35.676 EAL: No free 2048 kB hugepages reported on node 1 00:12:35.676 [2024-06-10 10:37:59.901985] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:41.051 Initializing NVMe Controllers 00:12:41.051 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:41.051 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:41.051 Initialization complete. Launching workers. 00:12:41.051 ======================================================== 00:12:41.051 Latency(us) 00:12:41.051 Device Information : IOPS MiB/s Average min max 00:12:41.051 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35601.58 139.07 3594.69 1098.98 7710.44 00:12:41.051 ======================================================== 00:12:41.051 Total : 35601.58 139.07 3594.69 1098.98 7710.44 00:12:41.051 00:12:41.051 [2024-06-10 10:38:04.919385] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:41.051 10:38:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:41.051 EAL: No free 2048 kB hugepages reported on node 1 00:12:41.051 [2024-06-10 10:38:05.103632] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:46.337 [2024-06-10 10:38:10.243322] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:46.337 Initializing NVMe Controllers 00:12:46.337 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:46.337 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:46.337 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:46.337 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:46.337 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:46.337 Initialization complete. Launching workers. 
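The two Latency(us) tables above summarize the 4 KiB read and write runs against the vfio-user controller. As a quick sanity check that is not part of the logged run, the MiB/s column follows directly from the IOPS column and the -o 4096 block size given on the command line:

  # Not part of the test: recompute throughput from the perf tables above.
  # MiB/s = IOPS * I/O size in bytes / 2^20, io_size taken from the -o 4096 flag.
  io_size=4096
  echo "scale=4; 39967.00 * $io_size / 1048576" | bc    # ~156.12 MiB/s, read table
  echo "scale=4; 35601.58 * $io_size / 1048576" | bc    # ~139.07 MiB/s (up to rounding), write table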
00:12:46.337 Starting thread on core 2 00:12:46.337 Starting thread on core 3 00:12:46.337 Starting thread on core 1 00:12:46.337 10:38:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:46.337 EAL: No free 2048 kB hugepages reported on node 1 00:12:46.337 [2024-06-10 10:38:10.508693] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:49.635 [2024-06-10 10:38:13.594000] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:49.635 Initializing NVMe Controllers 00:12:49.635 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:49.635 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:49.635 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:49.635 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:49.635 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:49.635 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:49.635 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:49.635 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:49.635 Initialization complete. Launching workers. 00:12:49.635 Starting thread on core 1 with urgent priority queue 00:12:49.635 Starting thread on core 2 with urgent priority queue 00:12:49.635 Starting thread on core 3 with urgent priority queue 00:12:49.635 Starting thread on core 0 with urgent priority queue 00:12:49.635 SPDK bdev Controller (SPDK2 ) core 0: 8200.00 IO/s 12.20 secs/100000 ios 00:12:49.635 SPDK bdev Controller (SPDK2 ) core 1: 8075.33 IO/s 12.38 secs/100000 ios 00:12:49.635 SPDK bdev Controller (SPDK2 ) core 2: 8082.00 IO/s 12.37 secs/100000 ios 00:12:49.635 SPDK bdev Controller (SPDK2 ) core 3: 10742.00 IO/s 9.31 secs/100000 ios 00:12:49.635 ======================================================== 00:12:49.635 00:12:49.635 10:38:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:49.635 EAL: No free 2048 kB hugepages reported on node 1 00:12:49.635 [2024-06-10 10:38:13.852656] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:49.635 Initializing NVMe Controllers 00:12:49.635 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:49.635 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:49.635 Namespace ID: 1 size: 0GB 00:12:49.635 Initialization complete. 00:12:49.635 INFO: using host memory buffer for IO 00:12:49.635 Hello world! 
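spdk_nvme_perf, reconnect, arbitration and hello_world above all attach to the same vfio-user endpoint through a single transport ID string; only the workload flags differ between runs. A minimal sketch of that invocation pattern, with the binary paths shortened to the SPDK build tree and the flags copied from the runs above:

  # Minimal sketch; TRID reuses the transport ID string from the runs above.
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
  ./build/bin/spdk_nvme_perf   -r "$TRID" -g -s 256 -q 128 -o 4096 -w read -t 5 -c 0x2
  ./build/examples/reconnect   -r "$TRID" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
  ./build/examples/arbitration -r "$TRID" -g -d 256 -t 3
  ./build/examples/hello_world -r "$TRID" -g -d 256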
00:12:49.635 [2024-06-10 10:38:13.865729] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:49.635 10:38:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:49.896 EAL: No free 2048 kB hugepages reported on node 1 00:12:49.896 [2024-06-10 10:38:14.122498] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:51.281 Initializing NVMe Controllers 00:12:51.281 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:51.281 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:51.281 Initialization complete. Launching workers. 00:12:51.281 submit (in ns) avg, min, max = 8825.3, 3952.5, 4000225.8 00:12:51.281 complete (in ns) avg, min, max = 15183.9, 2385.8, 3998244.2 00:12:51.281 00:12:51.281 Submit histogram 00:12:51.281 ================ 00:12:51.281 Range in us Cumulative Count 00:12:51.281 3.947 - 3.973: 0.4920% ( 96) 00:12:51.281 3.973 - 4.000: 5.1655% ( 912) 00:12:51.281 4.000 - 4.027: 12.1451% ( 1362) 00:12:51.281 4.027 - 4.053: 23.6599% ( 2247) 00:12:51.281 4.053 - 4.080: 35.2209% ( 2256) 00:12:51.281 4.080 - 4.107: 48.2474% ( 2542) 00:12:51.281 4.107 - 4.133: 65.1942% ( 3307) 00:12:51.281 4.133 - 4.160: 79.0509% ( 2704) 00:12:51.281 4.160 - 4.187: 89.5870% ( 2056) 00:12:51.281 4.187 - 4.213: 95.5519% ( 1164) 00:12:51.281 4.213 - 4.240: 98.2218% ( 521) 00:12:51.281 4.240 - 4.267: 99.0930% ( 170) 00:12:51.281 4.267 - 4.293: 99.3543% ( 51) 00:12:51.281 4.293 - 4.320: 99.4056% ( 10) 00:12:51.281 4.320 - 4.347: 99.4414% ( 7) 00:12:51.281 4.373 - 4.400: 99.4466% ( 1) 00:12:51.281 4.533 - 4.560: 99.4517% ( 1) 00:12:51.281 4.693 - 4.720: 99.4568% ( 1) 00:12:51.281 5.280 - 5.307: 99.4619% ( 1) 00:12:51.281 5.360 - 5.387: 99.4670% ( 1) 00:12:51.281 5.413 - 5.440: 99.4722% ( 1) 00:12:51.281 5.440 - 5.467: 99.4773% ( 1) 00:12:51.281 6.000 - 6.027: 99.4824% ( 1) 00:12:51.281 6.080 - 6.107: 99.4927% ( 2) 00:12:51.281 6.107 - 6.133: 99.5080% ( 3) 00:12:51.281 6.133 - 6.160: 99.5234% ( 3) 00:12:51.281 6.160 - 6.187: 99.5285% ( 1) 00:12:51.281 6.187 - 6.213: 99.5337% ( 1) 00:12:51.281 6.213 - 6.240: 99.5439% ( 2) 00:12:51.281 6.240 - 6.267: 99.5593% ( 3) 00:12:51.281 6.293 - 6.320: 99.5695% ( 2) 00:12:51.281 6.373 - 6.400: 99.5798% ( 2) 00:12:51.281 6.400 - 6.427: 99.5952% ( 3) 00:12:51.281 6.480 - 6.507: 99.6003% ( 1) 00:12:51.281 6.507 - 6.533: 99.6105% ( 2) 00:12:51.281 6.560 - 6.587: 99.6310% ( 4) 00:12:51.281 6.587 - 6.613: 99.6464% ( 3) 00:12:51.281 6.613 - 6.640: 99.6567% ( 2) 00:12:51.281 6.667 - 6.693: 99.6618% ( 1) 00:12:51.281 6.693 - 6.720: 99.6874% ( 5) 00:12:51.281 6.720 - 6.747: 99.7028% ( 3) 00:12:51.281 6.747 - 6.773: 99.7182% ( 3) 00:12:51.281 6.773 - 6.800: 99.7284% ( 2) 00:12:51.281 6.800 - 6.827: 99.7335% ( 1) 00:12:51.281 6.880 - 6.933: 99.7489% ( 3) 00:12:51.281 6.933 - 6.987: 99.7591% ( 2) 00:12:51.281 6.987 - 7.040: 99.7694% ( 2) 00:12:51.281 7.040 - 7.093: 99.7848% ( 3) 00:12:51.281 7.093 - 7.147: 99.7950% ( 2) 00:12:51.281 7.147 - 7.200: 99.8104% ( 3) 00:12:51.281 7.200 - 7.253: 99.8155% ( 1) 00:12:51.281 7.253 - 7.307: 99.8258% ( 2) 00:12:51.281 7.307 - 7.360: 99.8360% ( 2) 00:12:51.281 7.360 - 7.413: 99.8411% ( 1) 00:12:51.281 7.467 - 7.520: 99.8514% ( 2) 00:12:51.281 7.680 - 7.733: 99.8565% ( 1) 00:12:51.281 7.787 - 7.840: 99.8616% ( 1) 
00:12:51.281 10.560 - 10.613: 99.8668% ( 1) 00:12:51.281 11.040 - 11.093: 99.8719% ( 1) 00:12:51.281 13.173 - 13.227: 99.8770% ( 1) 00:12:51.281 13.973 - 14.080: 99.8821% ( 1) 00:12:51.281 3986.773 - 4014.080: 100.0000% ( 23) 00:12:51.281 00:12:51.281 Complete histogram 00:12:51.281 ================== 00:12:51.281 Range in us Cumulative Count 00:12:51.281 2.373 - 2.387: 0.0051% ( 1) 00:12:51.281 2.387 - 2.400: 0.0307% ( 5) 00:12:51.281 2.400 - 2.413: 1.1120% ( 211) 00:12:51.281 2.413 - 2.427: 1.2248% ( 22) 00:12:51.281 2.427 - 2.440: 1.4297% ( 40) 00:12:51.281 2.440 - 2.453: 1.5271% ( 19) 00:12:51.281 2.453 - 2.467: 47.4839% ( 8968) 00:12:51.281 2.467 - 2.480: 61.8172% ( 2797) 00:12:51.281 2.480 - 2.493: 72.0713% ( 2001) 00:12:51.281 2.493 - 2.507: 77.8672% ( 1131) 00:12:51.281 2.507 - 2.520: 81.1827% ( 647) 00:12:51.281 2.520 - 2.533: 84.4624% ( 640) 00:12:51.281 2.533 - 2.547: 90.2275% ( 1125) 00:12:51.282 2.547 - 2.560: 95.0343% ( 938) 00:12:51.282 2.560 - 2.573: 97.2328% ( 429) 00:12:51.282 2.573 - [2024-06-10 10:38:15.224920] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:51.282 2.587: 98.5498% ( 257) 00:12:51.282 2.587 - 2.600: 99.1545% ( 118) 00:12:51.282 2.600 - 2.613: 99.3492% ( 38) 00:12:51.282 2.613 - 2.627: 99.3902% ( 8) 00:12:51.282 2.627 - 2.640: 99.4056% ( 3) 00:12:51.282 2.640 - 2.653: 99.4158% ( 2) 00:12:51.282 2.653 - 2.667: 99.4209% ( 1) 00:12:51.282 4.507 - 4.533: 99.4261% ( 1) 00:12:51.282 4.560 - 4.587: 99.4414% ( 3) 00:12:51.282 4.667 - 4.693: 99.4466% ( 1) 00:12:51.282 4.827 - 4.853: 99.4517% ( 1) 00:12:51.282 4.880 - 4.907: 99.4568% ( 1) 00:12:51.282 4.960 - 4.987: 99.4722% ( 3) 00:12:51.282 4.987 - 5.013: 99.4927% ( 4) 00:12:51.282 5.040 - 5.067: 99.5080% ( 3) 00:12:51.282 5.093 - 5.120: 99.5183% ( 2) 00:12:51.282 5.120 - 5.147: 99.5234% ( 1) 00:12:51.282 5.173 - 5.200: 99.5285% ( 1) 00:12:51.282 5.227 - 5.253: 99.5388% ( 2) 00:12:51.282 5.253 - 5.280: 99.5490% ( 2) 00:12:51.282 5.280 - 5.307: 99.5644% ( 3) 00:12:51.282 5.307 - 5.333: 99.5747% ( 2) 00:12:51.282 5.360 - 5.387: 99.5798% ( 1) 00:12:51.282 5.387 - 5.413: 99.5900% ( 2) 00:12:51.282 5.573 - 5.600: 99.5952% ( 1) 00:12:51.282 5.600 - 5.627: 99.6003% ( 1) 00:12:51.282 5.627 - 5.653: 99.6054% ( 1) 00:12:51.282 5.733 - 5.760: 99.6105% ( 1) 00:12:51.282 5.813 - 5.840: 99.6157% ( 1) 00:12:51.282 5.867 - 5.893: 99.6208% ( 1) 00:12:51.282 6.027 - 6.053: 99.6259% ( 1) 00:12:51.282 6.533 - 6.560: 99.6310% ( 1) 00:12:51.282 7.307 - 7.360: 99.6362% ( 1) 00:12:51.282 7.627 - 7.680: 99.6413% ( 1) 00:12:51.282 8.320 - 8.373: 99.6464% ( 1) 00:12:51.282 10.667 - 10.720: 99.6515% ( 1) 00:12:51.282 11.413 - 11.467: 99.6567% ( 1) 00:12:51.282 13.653 - 13.760: 99.6618% ( 1) 00:12:51.282 24.107 - 24.213: 99.6669% ( 1) 00:12:51.282 43.947 - 44.160: 99.6720% ( 1) 00:12:51.282 47.787 - 48.000: 99.6772% ( 1) 00:12:51.282 1897.813 - 1911.467: 99.6823% ( 1) 00:12:51.282 1993.387 - 2007.040: 99.6874% ( 1) 00:12:51.282 3986.773 - 4014.080: 100.0000% ( 61) 00:12:51.282 00:12:51.282 10:38:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:51.282 10:38:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:51.282 10:38:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:51.282 10:38:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local 
malloc_num=Malloc4 00:12:51.282 10:38:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:51.282 [ 00:12:51.282 { 00:12:51.282 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:51.282 "subtype": "Discovery", 00:12:51.282 "listen_addresses": [], 00:12:51.282 "allow_any_host": true, 00:12:51.282 "hosts": [] 00:12:51.282 }, 00:12:51.282 { 00:12:51.282 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:51.282 "subtype": "NVMe", 00:12:51.282 "listen_addresses": [ 00:12:51.282 { 00:12:51.282 "trtype": "VFIOUSER", 00:12:51.282 "adrfam": "IPv4", 00:12:51.282 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:51.282 "trsvcid": "0" 00:12:51.282 } 00:12:51.282 ], 00:12:51.282 "allow_any_host": true, 00:12:51.282 "hosts": [], 00:12:51.282 "serial_number": "SPDK1", 00:12:51.282 "model_number": "SPDK bdev Controller", 00:12:51.282 "max_namespaces": 32, 00:12:51.282 "min_cntlid": 1, 00:12:51.282 "max_cntlid": 65519, 00:12:51.282 "namespaces": [ 00:12:51.282 { 00:12:51.282 "nsid": 1, 00:12:51.282 "bdev_name": "Malloc1", 00:12:51.282 "name": "Malloc1", 00:12:51.282 "nguid": "491A9BA3043B478DA5F9DF849B8240C5", 00:12:51.282 "uuid": "491a9ba3-043b-478d-a5f9-df849b8240c5" 00:12:51.282 }, 00:12:51.282 { 00:12:51.282 "nsid": 2, 00:12:51.282 "bdev_name": "Malloc3", 00:12:51.282 "name": "Malloc3", 00:12:51.282 "nguid": "BEBCA611AB8E45169F8638B002DB58D5", 00:12:51.282 "uuid": "bebca611-ab8e-4516-9f86-38b002db58d5" 00:12:51.282 } 00:12:51.282 ] 00:12:51.282 }, 00:12:51.282 { 00:12:51.282 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:51.282 "subtype": "NVMe", 00:12:51.282 "listen_addresses": [ 00:12:51.282 { 00:12:51.282 "trtype": "VFIOUSER", 00:12:51.282 "adrfam": "IPv4", 00:12:51.282 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:51.282 "trsvcid": "0" 00:12:51.282 } 00:12:51.282 ], 00:12:51.282 "allow_any_host": true, 00:12:51.282 "hosts": [], 00:12:51.282 "serial_number": "SPDK2", 00:12:51.282 "model_number": "SPDK bdev Controller", 00:12:51.282 "max_namespaces": 32, 00:12:51.282 "min_cntlid": 1, 00:12:51.282 "max_cntlid": 65519, 00:12:51.282 "namespaces": [ 00:12:51.282 { 00:12:51.282 "nsid": 1, 00:12:51.282 "bdev_name": "Malloc2", 00:12:51.282 "name": "Malloc2", 00:12:51.282 "nguid": "C8717AC6B4974EA6AAE03108E110B11B", 00:12:51.282 "uuid": "c8717ac6-b497-4ea6-aae0-3108e110b11b" 00:12:51.282 } 00:12:51.282 ] 00:12:51.282 } 00:12:51.282 ] 00:12:51.282 10:38:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:51.282 10:38:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=746716 00:12:51.282 10:38:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:51.282 10:38:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # local i=0 00:12:51.282 10:38:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:51.282 10:38:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:51.282 10:38:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:51.282 10:38:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1275 -- # return 0 00:12:51.282 10:38:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:51.282 10:38:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:51.282 EAL: No free 2048 kB hugepages reported on node 1 00:12:51.543 Malloc4 00:12:51.544 [2024-06-10 10:38:15.607875] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:51.544 10:38:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:51.544 [2024-06-10 10:38:15.753833] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:51.544 10:38:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:51.544 Asynchronous Event Request test 00:12:51.544 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:51.544 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:51.544 Registering asynchronous event callbacks... 00:12:51.544 Starting namespace attribute notice tests for all controllers... 00:12:51.544 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:51.544 aer_cb - Changed Namespace 00:12:51.544 Cleaning up... 00:12:51.804 [ 00:12:51.804 { 00:12:51.804 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:51.804 "subtype": "Discovery", 00:12:51.804 "listen_addresses": [], 00:12:51.804 "allow_any_host": true, 00:12:51.804 "hosts": [] 00:12:51.804 }, 00:12:51.805 { 00:12:51.805 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:51.805 "subtype": "NVMe", 00:12:51.805 "listen_addresses": [ 00:12:51.805 { 00:12:51.805 "trtype": "VFIOUSER", 00:12:51.805 "adrfam": "IPv4", 00:12:51.805 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:51.805 "trsvcid": "0" 00:12:51.805 } 00:12:51.805 ], 00:12:51.805 "allow_any_host": true, 00:12:51.805 "hosts": [], 00:12:51.805 "serial_number": "SPDK1", 00:12:51.805 "model_number": "SPDK bdev Controller", 00:12:51.805 "max_namespaces": 32, 00:12:51.805 "min_cntlid": 1, 00:12:51.805 "max_cntlid": 65519, 00:12:51.805 "namespaces": [ 00:12:51.805 { 00:12:51.805 "nsid": 1, 00:12:51.805 "bdev_name": "Malloc1", 00:12:51.805 "name": "Malloc1", 00:12:51.805 "nguid": "491A9BA3043B478DA5F9DF849B8240C5", 00:12:51.805 "uuid": "491a9ba3-043b-478d-a5f9-df849b8240c5" 00:12:51.805 }, 00:12:51.805 { 00:12:51.805 "nsid": 2, 00:12:51.805 "bdev_name": "Malloc3", 00:12:51.805 "name": "Malloc3", 00:12:51.805 "nguid": "BEBCA611AB8E45169F8638B002DB58D5", 00:12:51.805 "uuid": "bebca611-ab8e-4516-9f86-38b002db58d5" 00:12:51.805 } 00:12:51.805 ] 00:12:51.805 }, 00:12:51.805 { 00:12:51.805 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:51.805 "subtype": "NVMe", 00:12:51.805 "listen_addresses": [ 00:12:51.805 { 00:12:51.805 "trtype": "VFIOUSER", 00:12:51.805 "adrfam": "IPv4", 00:12:51.805 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:51.805 "trsvcid": "0" 00:12:51.805 } 00:12:51.805 ], 00:12:51.805 "allow_any_host": true, 00:12:51.805 "hosts": [], 00:12:51.805 "serial_number": "SPDK2", 00:12:51.805 "model_number": "SPDK bdev Controller", 00:12:51.805 
"max_namespaces": 32, 00:12:51.805 "min_cntlid": 1, 00:12:51.805 "max_cntlid": 65519, 00:12:51.805 "namespaces": [ 00:12:51.805 { 00:12:51.805 "nsid": 1, 00:12:51.805 "bdev_name": "Malloc2", 00:12:51.805 "name": "Malloc2", 00:12:51.805 "nguid": "C8717AC6B4974EA6AAE03108E110B11B", 00:12:51.805 "uuid": "c8717ac6-b497-4ea6-aae0-3108e110b11b" 00:12:51.805 }, 00:12:51.805 { 00:12:51.805 "nsid": 2, 00:12:51.805 "bdev_name": "Malloc4", 00:12:51.805 "name": "Malloc4", 00:12:51.805 "nguid": "F044DBA086404614AB5430A1C6FAFAB1", 00:12:51.805 "uuid": "f044dba0-8640-4614-ab54-30a1c6fafab1" 00:12:51.805 } 00:12:51.805 ] 00:12:51.805 } 00:12:51.805 ] 00:12:51.805 10:38:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 746716 00:12:51.805 10:38:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:51.805 10:38:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 737673 00:12:51.805 10:38:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@949 -- # '[' -z 737673 ']' 00:12:51.805 10:38:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # kill -0 737673 00:12:51.805 10:38:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # uname 00:12:51.805 10:38:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:51.805 10:38:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 737673 00:12:51.805 10:38:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:51.805 10:38:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:51.805 10:38:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # echo 'killing process with pid 737673' 00:12:51.805 killing process with pid 737673 00:12:51.805 10:38:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@968 -- # kill 737673 00:12:51.805 [2024-06-10 10:38:16.004106] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:51.805 10:38:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@973 -- # wait 737673 00:12:52.066 10:38:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:52.066 10:38:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:52.066 10:38:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:52.066 10:38:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:52.066 10:38:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:52.066 10:38:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=746868 00:12:52.066 10:38:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 746868' 00:12:52.066 Process pid: 746868 00:12:52.066 10:38:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:52.066 10:38:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:52.066 10:38:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 746868 00:12:52.066 10:38:16 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@830 -- # '[' -z 746868 ']' 00:12:52.066 10:38:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.066 10:38:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:52.066 10:38:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.066 10:38:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:52.066 10:38:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:52.066 [2024-06-10 10:38:16.232621] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:12:52.066 [2024-06-10 10:38:16.233573] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:12:52.066 [2024-06-10 10:38:16.233616] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.066 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.066 [2024-06-10 10:38:16.295760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:52.328 [2024-06-10 10:38:16.361169] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.328 [2024-06-10 10:38:16.361209] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:52.328 [2024-06-10 10:38:16.361216] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:52.328 [2024-06-10 10:38:16.361222] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:52.328 [2024-06-10 10:38:16.361228] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:52.328 [2024-06-10 10:38:16.361320] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.328 [2024-06-10 10:38:16.361449] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.328 [2024-06-10 10:38:16.361604] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.328 [2024-06-10 10:38:16.361606] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:12:52.328 [2024-06-10 10:38:16.426691] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:12:52.328 [2024-06-10 10:38:16.426797] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:52.328 [2024-06-10 10:38:16.427819] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:12:52.328 [2024-06-10 10:38:16.428348] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:12:52.328 [2024-06-10 10:38:16.428415] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
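With the interrupt-mode target started, the trace that follows rebuilds the VFIOUSER transport and both test subsystems over JSON-RPC. Condensed into a sketch, with rpc.py shortened to its name and the socket paths exactly as used throughout this run:

  # Condensed from the nvmf_vfio_user.sh trace that follows; the -M -I transport
  # arguments are passed through from the harness unchanged.
  rpc.py nvmf_create_transport -t VFIOUSER -M -I
  for i in 1 2; do
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
      rpc.py bdev_malloc_create 64 512 -b Malloc$i
      rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
          -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done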
00:12:52.900 10:38:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:52.900 10:38:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@863 -- # return 0 00:12:52.900 10:38:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:53.842 10:38:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:12:54.103 10:38:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:54.103 10:38:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:54.103 10:38:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:54.103 10:38:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:54.103 10:38:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:54.103 Malloc1 00:12:54.103 10:38:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:54.364 10:38:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:54.625 10:38:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:54.625 [2024-06-10 10:38:18.826038] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:54.625 10:38:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:54.625 10:38:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:54.625 10:38:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:54.886 Malloc2 00:12:54.886 10:38:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:55.148 10:38:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:55.148 10:38:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:55.410 10:38:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:12:55.410 10:38:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 746868 00:12:55.410 10:38:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@949 -- # '[' -z 746868 ']' 00:12:55.410 10:38:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # kill -0 746868 00:12:55.410 
10:38:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # uname 00:12:55.410 10:38:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:55.410 10:38:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 746868 00:12:55.410 10:38:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:55.410 10:38:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:55.410 10:38:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # echo 'killing process with pid 746868' 00:12:55.410 killing process with pid 746868 00:12:55.410 10:38:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@968 -- # kill 746868 00:12:55.410 [2024-06-10 10:38:19.583175] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:55.410 10:38:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@973 -- # wait 746868 00:12:55.672 10:38:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:55.672 10:38:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:55.672 00:12:55.672 real 0m50.538s 00:12:55.672 user 3m20.343s 00:12:55.672 sys 0m2.973s 00:12:55.672 10:38:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:55.672 10:38:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:55.672 ************************************ 00:12:55.672 END TEST nvmf_vfio_user 00:12:55.672 ************************************ 00:12:55.672 10:38:19 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:55.672 10:38:19 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:55.672 10:38:19 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:55.672 10:38:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:55.672 ************************************ 00:12:55.672 START TEST nvmf_vfio_user_nvme_compliance 00:12:55.672 ************************************ 00:12:55.672 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:55.672 * Looking for test storage... 
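The teardown traced above (killprocess 737673, and 746868 later in the same form) follows one pattern each time: confirm the pid is set and still alive, resolve the process name, check it is not a sudo wrapper, then kill and wait. A rough reconstruction of the control flow visible in that xtrace; branches the traced runs never take are left out:

  # Hedged reconstruction of the killprocess flow from the xtrace above; the real
  # helper in autotest_common.sh has additional branches that are not exercised here.
  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1                      # traced as: '[' -z <pid> ']'
      kill -0 "$pid" || return 0                     # nothing to do if already gone
      local process_name=
      [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
      # Both traced runs resolve process_name=reactor_0, i.e. not a sudo wrapper,
      # and fall through to a plain kill; the sudo branch is not reconstructed.
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                    # reap the target process
  }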
00:12:55.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:12:55.672 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:55.672 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:12:55.672 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.672 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.672 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.672 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.672 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:55.672 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:55.672 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.672 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:55.672 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.672 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.672 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:55.672 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:55.672 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.672 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:55.672 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:55.672 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:55.672 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:55.672 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.672 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.672 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.672 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.672 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.672 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.672 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:12:55.672 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.672 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:12:55.672 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:55.672 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:55.672 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:55.672 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:55.672 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.673 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:55.673 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:55.673 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:55.673 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:55.673 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:55.673 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:12:55.673 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:12:55.673 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:12:55.673 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=747621 00:12:55.673 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 747621' 00:12:55.673 Process pid: 747621 00:12:55.673 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:55.673 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:55.673 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 747621 00:12:55.673 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@830 -- # '[' -z 747621 ']' 00:12:55.673 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.673 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:55.673 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.673 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:55.673 10:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:55.934 [2024-06-10 10:38:20.003119] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:12:55.934 [2024-06-10 10:38:20.003191] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:55.934 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.934 [2024-06-10 10:38:20.070527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:55.934 [2024-06-10 10:38:20.147346] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:55.934 [2024-06-10 10:38:20.147385] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:55.934 [2024-06-10 10:38:20.147393] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:55.934 [2024-06-10 10:38:20.147400] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:55.934 [2024-06-10 10:38:20.147405] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:55.934 [2024-06-10 10:38:20.147542] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:12:55.934 [2024-06-10 10:38:20.147665] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:12:55.934 [2024-06-10 10:38:20.147667] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.506 10:38:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:56.506 10:38:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@863 -- # return 0 00:12:56.506 10:38:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:12:57.892 10:38:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:57.892 10:38:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:12:57.892 10:38:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:57.892 10:38:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:57.892 10:38:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:57.892 10:38:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:57.892 10:38:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:12:57.892 10:38:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:57.892 10:38:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:57.892 10:38:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:57.892 malloc0 00:12:57.892 10:38:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:57.892 10:38:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:12:57.892 10:38:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:57.892 10:38:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:57.892 10:38:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:57.892 10:38:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:57.892 10:38:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:57.892 10:38:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:57.892 10:38:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:57.892 10:38:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:57.892 10:38:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:57.892 10:38:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:57.892 [2024-06-10 10:38:21.866918] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated 
feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:57.892 10:38:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:57.892 10:38:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:12:57.892 EAL: No free 2048 kB hugepages reported on node 1 00:12:57.892 00:12:57.892 00:12:57.892 CUnit - A unit testing framework for C - Version 2.1-3 00:12:57.892 http://cunit.sourceforge.net/ 00:12:57.892 00:12:57.892 00:12:57.892 Suite: nvme_compliance 00:12:57.892 Test: admin_identify_ctrlr_verify_dptr ...[2024-06-10 10:38:22.045318] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:57.892 [2024-06-10 10:38:22.046632] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:12:57.892 [2024-06-10 10:38:22.046642] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:12:57.892 [2024-06-10 10:38:22.046647] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:12:57.892 [2024-06-10 10:38:22.048334] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:57.892 passed 00:12:57.892 Test: admin_identify_ctrlr_verify_fused ...[2024-06-10 10:38:22.144939] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:57.892 [2024-06-10 10:38:22.147957] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.153 passed 00:12:58.153 Test: admin_identify_ns ...[2024-06-10 10:38:22.248181] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.153 [2024-06-10 10:38:22.309252] ctrlr.c:2707:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:12:58.153 [2024-06-10 10:38:22.317258] ctrlr.c:2707:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:12:58.153 [2024-06-10 10:38:22.338372] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.153 passed 00:12:58.153 Test: admin_get_features_mandatory_features ...[2024-06-10 10:38:22.430004] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.153 [2024-06-10 10:38:22.433031] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.412 passed 00:12:58.412 Test: admin_get_features_optional_features ...[2024-06-10 10:38:22.529599] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.412 [2024-06-10 10:38:22.532617] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.412 passed 00:12:58.412 Test: admin_set_features_number_of_queues ...[2024-06-10 10:38:22.628494] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.672 [2024-06-10 10:38:22.733347] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.672 passed 00:12:58.672 Test: admin_get_log_page_mandatory_logs ...[2024-06-10 10:38:22.827735] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.672 [2024-06-10 10:38:22.830761] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.672 passed 
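For reference, the compliance target exercised above is assembled with a short RPC sequence before the CUnit binary is launched. A minimal consolidated sketch of those steps, assuming rpc_cmd wraps the in-tree scripts/rpc.py against the default socket and that nvmf_tgt is already running; every command and argument is taken from the trace:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Create the vfio-user transport, a 64 MB / 512 B-block malloc namespace,
  # and a subsystem listening at /var/run/vfio-user.
  "$SPDK/scripts/rpc.py" nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  "$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b malloc0
  "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
  # Run the CUnit compliance suite against that endpoint.
  "$SPDK/test/nvme/compliance/nvme_compliance" -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'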
00:12:58.672 Test: admin_get_log_page_with_lpo ...[2024-06-10 10:38:22.925881] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.932 [2024-06-10 10:38:22.993255] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:12:58.932 [2024-06-10 10:38:23.006314] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.932 passed 00:12:58.932 Test: fabric_property_get ...[2024-06-10 10:38:23.099983] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.932 [2024-06-10 10:38:23.101207] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:12:58.932 [2024-06-10 10:38:23.103000] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.932 passed 00:12:58.932 Test: admin_delete_io_sq_use_admin_qid ...[2024-06-10 10:38:23.199557] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.932 [2024-06-10 10:38:23.200798] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:12:58.932 [2024-06-10 10:38:23.202580] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:59.193 passed 00:12:59.193 Test: admin_delete_io_sq_delete_sq_twice ...[2024-06-10 10:38:23.296479] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:59.193 [2024-06-10 10:38:23.380253] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:59.193 [2024-06-10 10:38:23.396253] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:59.193 [2024-06-10 10:38:23.401344] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:59.193 passed 00:12:59.454 Test: admin_delete_io_cq_use_admin_qid ...[2024-06-10 10:38:23.499675] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:59.454 [2024-06-10 10:38:23.500898] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:12:59.454 [2024-06-10 10:38:23.502692] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:59.454 passed 00:12:59.454 Test: admin_delete_io_cq_delete_cq_first ...[2024-06-10 10:38:23.596479] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:59.454 [2024-06-10 10:38:23.676252] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:59.454 [2024-06-10 10:38:23.700251] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:59.454 [2024-06-10 10:38:23.705339] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:59.716 passed 00:12:59.716 Test: admin_create_io_cq_verify_iv_pc ...[2024-06-10 10:38:23.799547] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:59.716 [2024-06-10 10:38:23.800768] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:12:59.716 [2024-06-10 10:38:23.800789] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:12:59.716 [2024-06-10 10:38:23.802559] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:59.716 passed 00:12:59.716 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-06-10 
10:38:23.900513] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:59.716 [2024-06-10 10:38:23.992250] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:12:59.716 [2024-06-10 10:38:24.000255] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:12:59.977 [2024-06-10 10:38:24.008249] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:12:59.977 [2024-06-10 10:38:24.016248] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:12:59.977 [2024-06-10 10:38:24.045338] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:59.977 passed 00:12:59.977 Test: admin_create_io_sq_verify_pc ...[2024-06-10 10:38:24.136959] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:59.977 [2024-06-10 10:38:24.153258] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:12:59.977 [2024-06-10 10:38:24.171116] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:59.977 passed 00:13:00.238 Test: admin_create_io_qp_max_qps ...[2024-06-10 10:38:24.267651] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:01.181 [2024-06-10 10:38:25.375253] nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:01.753 [2024-06-10 10:38:25.766240] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:01.753 passed 00:13:01.753 Test: admin_create_io_sq_shared_cq ...[2024-06-10 10:38:25.860427] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:01.753 [2024-06-10 10:38:25.991252] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:01.753 [2024-06-10 10:38:26.028306] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:02.014 passed 00:13:02.014 00:13:02.014 Run Summary: Type Total Ran Passed Failed Inactive 00:13:02.014 suites 1 1 n/a 0 0 00:13:02.014 tests 18 18 18 0 0 00:13:02.014 asserts 360 360 360 0 n/a 00:13:02.014 00:13:02.014 Elapsed time = 1.672 seconds 00:13:02.014 10:38:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 747621 00:13:02.014 10:38:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@949 -- # '[' -z 747621 ']' 00:13:02.014 10:38:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # kill -0 747621 00:13:02.014 10:38:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # uname 00:13:02.014 10:38:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:02.014 10:38:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 747621 00:13:02.014 10:38:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:02.014 10:38:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:02.014 10:38:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # echo 'killing process with pid 747621' 00:13:02.014 killing process with pid 747621 00:13:02.014 10:38:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@968 -- # kill 747621 00:13:02.014 [2024-06-10 10:38:26.138491] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:02.014 10:38:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # wait 747621 00:13:02.014 10:38:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:02.014 10:38:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:02.014 00:13:02.014 real 0m6.469s 00:13:02.014 user 0m18.509s 00:13:02.014 sys 0m0.469s 00:13:02.014 10:38:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:02.014 10:38:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:02.014 ************************************ 00:13:02.014 END TEST nvmf_vfio_user_nvme_compliance 00:13:02.014 ************************************ 00:13:02.276 10:38:26 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:02.276 10:38:26 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:02.276 10:38:26 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:02.276 10:38:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:02.276 ************************************ 00:13:02.276 START TEST nvmf_vfio_user_fuzz 00:13:02.276 ************************************ 00:13:02.276 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:02.276 * Looking for test storage... 
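The killprocess teardown traced just above follows a simple pattern: confirm the PID is still alive, make sure it is not the sudo wrapper, then signal it and reap it. A simplified paraphrase of that flow (the real helper lives in common/autotest_common.sh and carries more error handling than this sketch):

  killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                  # a PID must be supplied
    kill -0 "$pid" 2>/dev/null || return 0     # already gone, nothing to do
    if [ "$(uname)" = Linux ]; then
      local name
      name=$(ps --no-headers -o comm= "$pid")  # reactor_0 in the trace above
      [ "$name" != sudo ] || return 1          # never kill the sudo wrapper itself
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                        # reap it before the next test starts
  }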
00:13:02.276 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:02.276 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:02.276 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:02.276 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:02.276 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:02.276 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:02.276 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:02.276 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:02.276 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:02.276 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:02.276 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:02.276 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:02.276 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:02.276 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:02.276 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:02.276 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:02.276 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:02.276 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:02.276 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:02.276 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:02.276 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:02.276 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:02.276 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:02.276 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.276 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.276 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.276 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:02.276 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.276 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:13:02.276 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:02.276 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:02.276 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:02.276 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:02.276 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:02.277 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:02.277 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:02.277 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:02.277 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:02.277 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:02.277 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:02.277 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:02.277 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:02.277 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:02.277 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:02.277 10:38:26 
nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=749018 00:13:02.277 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 749018' 00:13:02.277 Process pid: 749018 00:13:02.277 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:02.277 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:02.277 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 749018 00:13:02.277 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@830 -- # '[' -z 749018 ']' 00:13:02.277 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.277 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:02.277 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.277 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:02.277 10:38:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:03.219 10:38:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:03.219 10:38:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@863 -- # return 0 00:13:03.219 10:38:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:04.161 10:38:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:04.161 10:38:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:04.161 10:38:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:04.161 10:38:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:04.161 10:38:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:04.161 10:38:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:04.161 10:38:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:04.161 10:38:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:04.161 malloc0 00:13:04.161 10:38:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:04.161 10:38:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:04.161 10:38:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:04.161 10:38:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:04.161 10:38:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:04.161 10:38:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:04.161 10:38:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:04.161 10:38:28 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:13:04.161 10:38:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:04.161 10:38:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:04.161 10:38:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:04.161 10:38:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:04.161 10:38:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:04.161 10:38:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:04.161 10:38:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:36.274 Fuzzing completed. Shutting down the fuzz application 00:13:36.274 00:13:36.274 Dumping successful admin opcodes: 00:13:36.274 8, 9, 10, 24, 00:13:36.274 Dumping successful io opcodes: 00:13:36.274 0, 00:13:36.274 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1139951, total successful commands: 4490, random_seed: 482703360 00:13:36.274 NS: 0x200003a1ef00 admin qp, Total commands completed: 143268, total successful commands: 1164, random_seed: 1474896384 00:13:36.274 10:38:59 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:36.274 10:38:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:36.274 10:38:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:36.274 10:38:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:36.274 10:38:59 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 749018 00:13:36.274 10:38:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@949 -- # '[' -z 749018 ']' 00:13:36.274 10:38:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # kill -0 749018 00:13:36.274 10:38:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # uname 00:13:36.274 10:38:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:36.274 10:38:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 749018 00:13:36.274 10:38:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:36.274 10:38:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:36.274 10:38:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # echo 'killing process with pid 749018' 00:13:36.274 killing process with pid 749018 00:13:36.274 10:38:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # kill 749018 00:13:36.274 10:38:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # wait 749018 00:13:36.274 10:38:59 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:36.274 
10:39:00 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:36.274 00:13:36.274 real 0m33.659s 00:13:36.274 user 0m38.365s 00:13:36.274 sys 0m25.400s 00:13:36.274 10:39:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:36.274 10:39:00 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:36.274 ************************************ 00:13:36.274 END TEST nvmf_vfio_user_fuzz 00:13:36.274 ************************************ 00:13:36.274 10:39:00 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:36.274 10:39:00 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:36.274 10:39:00 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:36.274 10:39:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:36.274 ************************************ 00:13:36.274 START TEST nvmf_host_management 00:13:36.274 ************************************ 00:13:36.274 10:39:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:36.274 * Looking for test storage... 00:13:36.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:36.274 10:39:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:36.274 10:39:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:13:36.274 10:39:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:36.274 10:39:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:36.274 10:39:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.274 10:39:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:36.274 10:39:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:36.274 10:39:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:36.274 10:39:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.274 10:39:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:36.274 10:39:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.274 10:39:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:36.274 10:39:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:36.274 10:39:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:36.274 10:39:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.274 10:39:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:36.274 10:39:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:36.274 10:39:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:36.274 10:39:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:36.274 10:39:00 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.274 10:39:00 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.274 10:39:00 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.274 10:39:00 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.274 10:39:00 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.274 10:39:00 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.274 10:39:00 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:13:36.274 10:39:00 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.274 10:39:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:13:36.274 10:39:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:36.274 10:39:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:36.274 10:39:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:36.274 10:39:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:36.274 10:39:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
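One small detail from the common.sh setup above: the host ID is simply the UUID tail of the NQN that nvme gen-hostnqn returns, which is why NVME_HOSTID matches the suffix of NVME_HOSTNQN in the trace. A sketch of that relationship (the exact expansion used in nvmf/common.sh may differ):

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # strip everything up to the last ':', leaving the bare UUID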
00:13:36.274 10:39:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:36.274 10:39:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:36.274 10:39:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:36.275 10:39:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:36.275 10:39:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:36.275 10:39:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:13:36.275 10:39:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:36.275 10:39:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:36.275 10:39:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:36.275 10:39:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:36.275 10:39:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:36.275 10:39:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.275 10:39:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:36.275 10:39:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.275 10:39:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:36.275 10:39:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:36.275 10:39:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:13:36.275 10:39:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:44.417 10:39:07 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:44.417 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:44.417 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:44.417 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
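Between the "Found 0000:31:00.x" lines above and the "Found net devices under ..." lines that follow, the helper keeps only PCI functions whose vendor/device IDs are in the e810 list and then resolves each one to its kernel interface through sysfs. In essence, with the device address taken from the trace (the cvl_0_0 / cvl_0_1 names are whatever this rig assigns to those ports):

  pci=0000:31:00.0
  # Every netdev bound to a PCI function appears under its sysfs node,
  # matching pci_net_devs=(/sys/bus/pci/devices/$pci/net/*) in the trace.
  net_dev=$(basename /sys/bus/pci/devices/$pci/net/*)
  echo "Found net devices under $pci: $net_dev"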
00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:44.418 Found net devices under 0000:31:00.0: cvl_0_0 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:44.418 Found net devices under 0000:31:00.1: cvl_0_1 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:44.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:44.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:13:44.418 00:13:44.418 --- 10.0.0.2 ping statistics --- 00:13:44.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.418 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:44.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:44.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.357 ms 00:13:44.418 00:13:44.418 --- 10.0.0.1 ping statistics --- 00:13:44.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.418 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=759489 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 759489 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 759489 ']' 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:44.418 10:39:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:44.418 [2024-06-10 10:39:07.692200] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
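The nvmf_tcp_init steps traced above give the target and initiator separate network stacks on the same host: one E810 port is moved into a namespace and addressed as 10.0.0.2, the other stays in the root namespace as 10.0.0.1, and a firewall rule plus two pings confirm the path. Consolidated from the trace:

  NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$NVMF_TARGET_NAMESPACE"
  ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"             # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address stays in the root namespace
  ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
  ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                             # root namespace -> target
  ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1      # target namespace -> initiator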
00:13:44.418 [2024-06-10 10:39:07.692277] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.418 EAL: No free 2048 kB hugepages reported on node 1 00:13:44.418 [2024-06-10 10:39:07.780870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:44.418 [2024-06-10 10:39:07.877164] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:44.418 [2024-06-10 10:39:07.877228] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:44.418 [2024-06-10 10:39:07.877236] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:44.418 [2024-06-10 10:39:07.877250] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:44.418 [2024-06-10 10:39:07.877257] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:44.418 [2024-06-10 10:39:07.877397] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:13:44.418 [2024-06-10 10:39:07.877565] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:13:44.418 [2024-06-10 10:39:07.877730] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.418 [2024-06-10 10:39:07.877731] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4 00:13:44.418 10:39:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:44.418 10:39:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:13:44.418 10:39:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:44.418 10:39:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:44.418 10:39:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:44.418 10:39:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:44.418 10:39:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:44.418 10:39:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:44.418 10:39:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:44.418 [2024-06-10 10:39:08.516707] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:44.418 10:39:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:44.418 10:39:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:44.418 10:39:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:44.418 10:39:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:44.418 10:39:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:44.418 10:39:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:13:44.418 10:39:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:13:44.418 10:39:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:44.418 10:39:08 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:44.418 Malloc0 00:13:44.419 [2024-06-10 10:39:08.579937] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:44.419 [2024-06-10 10:39:08.580195] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.419 10:39:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:44.419 10:39:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:44.419 10:39:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:44.419 10:39:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:44.419 10:39:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=759734 00:13:44.419 10:39:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 759734 /var/tmp/bdevperf.sock 00:13:44.419 10:39:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 759734 ']' 00:13:44.419 10:39:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:44.419 10:39:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:44.419 10:39:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:44.419 10:39:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:44.419 10:39:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:44.419 10:39:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:44.419 10:39:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:44.419 { 00:13:44.419 "params": { 00:13:44.419 "name": "Nvme$subsystem", 00:13:44.419 "trtype": "$TEST_TRANSPORT", 00:13:44.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:44.419 "adrfam": "ipv4", 00:13:44.419 "trsvcid": "$NVMF_PORT", 00:13:44.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:44.419 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:44.419 "hdgst": ${hdgst:-false}, 00:13:44.419 "ddgst": ${ddgst:-false} 00:13:44.419 }, 00:13:44.419 "method": "bdev_nvme_attach_controller" 00:13:44.419 } 00:13:44.419 EOF 00:13:44.419 )") 00:13:44.419 10:39:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:44.419 10:39:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:44.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:44.419 10:39:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:44.419 10:39:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:44.419 10:39:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:44.419 10:39:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
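The bdevperf initiator launched here is pointed at the target through a JSON config produced on the fly by gen_nvmf_target_json (the heredoc template traced above); the --json /dev/fd/63 seen in the command line is consistent with bash process substitution feeding that output straight to the tool. The invocation, reassembled from the trace under that assumption:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # gen_nvmf_target_json comes from test/nvmf/common.sh, sourced earlier in the trace.
  # 64-deep queue, 64 KiB I/O, 'verify' workload for 10 seconds against Nvme0.
  "$SPDK/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10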
00:13:44.419 10:39:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:44.419 10:39:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:44.419 "params": { 00:13:44.419 "name": "Nvme0", 00:13:44.419 "trtype": "tcp", 00:13:44.419 "traddr": "10.0.0.2", 00:13:44.419 "adrfam": "ipv4", 00:13:44.419 "trsvcid": "4420", 00:13:44.419 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:44.419 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:44.419 "hdgst": false, 00:13:44.419 "ddgst": false 00:13:44.419 }, 00:13:44.419 "method": "bdev_nvme_attach_controller" 00:13:44.419 }' 00:13:44.419 [2024-06-10 10:39:08.681656] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:13:44.419 [2024-06-10 10:39:08.681708] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid759734 ] 00:13:44.679 EAL: No free 2048 kB hugepages reported on node 1 00:13:44.679 [2024-06-10 10:39:08.741612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.679 [2024-06-10 10:39:08.806605] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.940 Running I/O for 10 seconds... 00:13:45.200 10:39:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:45.200 10:39:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:13:45.200 10:39:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:45.200 10:39:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:45.201 10:39:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:45.201 10:39:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:45.201 10:39:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:45.201 10:39:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:45.201 10:39:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:45.201 10:39:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:45.201 10:39:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:13:45.201 10:39:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:13:45.201 10:39:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:45.201 10:39:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:45.201 10:39:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:45.201 10:39:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:45.201 10:39:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:45.201 10:39:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:45.463 10:39:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:45.463 10:39:09 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=451 00:13:45.463 10:39:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 451 -ge 100 ']' 00:13:45.463 10:39:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:13:45.463 10:39:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:13:45.463 10:39:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:13:45.463 10:39:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:45.463 10:39:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:45.463 10:39:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:45.463 [2024-06-10 10:39:09.528942] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c8a0 is same with the state(5) to be set 00:13:45.463 [2024-06-10 10:39:09.529015] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c8a0 is same with the state(5) to be set 00:13:45.463 [2024-06-10 10:39:09.529023] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c8a0 is same with the state(5) to be set 00:13:45.463 [2024-06-10 10:39:09.529030] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c8a0 is same with the state(5) to be set 00:13:45.463 [2024-06-10 10:39:09.529037] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c8a0 is same with the state(5) to be set 00:13:45.463 [2024-06-10 10:39:09.529045] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c8a0 is same with the state(5) to be set 00:13:45.463 [2024-06-10 10:39:09.529052] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c8a0 is same with the state(5) to be set 00:13:45.463 [2024-06-10 10:39:09.529058] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c8a0 is same with the state(5) to be set 00:13:45.463 [2024-06-10 10:39:09.529065] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c8a0 is same with the state(5) to be set 00:13:45.463 [2024-06-10 10:39:09.529071] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c8a0 is same with the state(5) to be set 00:13:45.463 [2024-06-10 10:39:09.529078] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c8a0 is same with the state(5) to be set 00:13:45.463 [2024-06-10 10:39:09.529084] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c8a0 is same with the state(5) to be set 00:13:45.463 [2024-06-10 10:39:09.529091] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c8a0 is same with the state(5) to be set 00:13:45.463 [2024-06-10 10:39:09.529097] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c8a0 is same with the state(5) to be set 00:13:45.463 [2024-06-10 10:39:09.529104] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c8a0 is same with the state(5) to be set 00:13:45.463 [2024-06-10 10:39:09.529111] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c8a0 is same with the state(5) to be set 00:13:45.463 [2024-06-10 10:39:09.529117] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x83c8a0 is same with the state(5) to be set 00:13:45.463 [2024-06-10 10:39:09.529123] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c8a0 is same with the state(5) to be set 00:13:45.463 [2024-06-10 10:39:09.529134] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c8a0 is same with the state(5) to be set 00:13:45.463 [2024-06-10 10:39:09.529141] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c8a0 is same with the state(5) to be set 00:13:45.463 [2024-06-10 10:39:09.529148] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c8a0 is same with the state(5) to be set 00:13:45.463 [2024-06-10 10:39:09.529154] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c8a0 is same with the state(5) to be set 00:13:45.463 [2024-06-10 10:39:09.529160] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c8a0 is same with the state(5) to be set 00:13:45.463 [2024-06-10 10:39:09.529166] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c8a0 is same with the state(5) to be set 00:13:45.464 [2024-06-10 10:39:09.529173] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c8a0 is same with the state(5) to be set 00:13:45.464 [2024-06-10 10:39:09.529179] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c8a0 is same with the state(5) to be set 00:13:45.464 [2024-06-10 10:39:09.529699] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:13:45.464 [2024-06-10 10:39:09.532184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.464 [2024-06-10 10:39:09.532202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.464 [2024-06-10 10:39:09.532213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.464 [2024-06-10 10:39:09.532223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.464 [2024-06-10 10:39:09.532231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.464 [2024-06-10 10:39:09.532238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.464 [2024-06-10 10:39:09.532249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.464 [2024-06-10 10:39:09.532256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.464 [2024-06-10 10:39:09.532264] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1448130 is same with the state(5) to be set 00:13:45.464 10:39:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:45.464 10:39:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:45.464 10:39:09 nvmf_tcp.nvmf_host_management -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:13:45.464 10:39:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:45.464 [2024-06-10 10:39:09.542194] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1448130 (9): Bad file descriptor 00:13:45.464 10:39:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:45.464 10:39:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:13:45.464 [2024-06-10 10:39:09.552257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.464 [2024-06-10 10:39:09.552270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.464 [2024-06-10 10:39:09.552285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.464 [2024-06-10 10:39:09.552298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.464 [2024-06-10 10:39:09.552308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.464 [2024-06-10 10:39:09.552315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.464 [2024-06-10 10:39:09.552325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.464 [2024-06-10 10:39:09.552333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.464 [2024-06-10 10:39:09.552343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.464 [2024-06-10 10:39:09.552350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.464 [2024-06-10 10:39:09.552361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.464 [2024-06-10 10:39:09.552370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.464 [2024-06-10 10:39:09.552380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.464 [2024-06-10 10:39:09.552387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.464 [2024-06-10 10:39:09.552397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.464 [2024-06-10 10:39:09.552405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.464 [2024-06-10 10:39:09.552414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.464 [2024-06-10 10:39:09.552421] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.464 [2024-06-10 10:39:09.552432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.464 [2024-06-10 10:39:09.552440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.464 [2024-06-10 10:39:09.552450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.464 [2024-06-10 10:39:09.552457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.464 [2024-06-10 10:39:09.552466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.464 [2024-06-10 10:39:09.552474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.464 [2024-06-10 10:39:09.552484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.464 [2024-06-10 10:39:09.552491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.464 [2024-06-10 10:39:09.552501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.464 [2024-06-10 10:39:09.552508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.464 [2024-06-10 10:39:09.552519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.464 [2024-06-10 10:39:09.552529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.464 [2024-06-10 10:39:09.552539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.464 [2024-06-10 10:39:09.552547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.464 [2024-06-10 10:39:09.552558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.464 [2024-06-10 10:39:09.552566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.464 [2024-06-10 10:39:09.552575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.464 [2024-06-10 10:39:09.552583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.464 [2024-06-10 10:39:09.552592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.464 [2024-06-10 10:39:09.552600] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.464 [2024-06-10 10:39:09.552610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.464 [2024-06-10 10:39:09.552617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.464 [2024-06-10 10:39:09.552627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.464 [2024-06-10 10:39:09.552635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.464 [2024-06-10 10:39:09.552645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.464 [2024-06-10 10:39:09.552652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.464 [2024-06-10 10:39:09.552662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.464 [2024-06-10 10:39:09.552669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.464 [2024-06-10 10:39:09.552679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.464 [2024-06-10 10:39:09.552686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.464 [2024-06-10 10:39:09.552696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.464 [2024-06-10 10:39:09.552703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.464 [2024-06-10 10:39:09.552713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.464 [2024-06-10 10:39:09.552720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.464 [2024-06-10 10:39:09.552729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.464 [2024-06-10 10:39:09.552737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.464 [2024-06-10 10:39:09.552748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.464 [2024-06-10 10:39:09.552755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.464 [2024-06-10 10:39:09.552765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.464 [2024-06-10 10:39:09.552772] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.464 [2024-06-10 10:39:09.552782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.464 [2024-06-10 10:39:09.552789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.464 [2024-06-10 10:39:09.552799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.464 [2024-06-10 10:39:09.552808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.464 [2024-06-10 10:39:09.552817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.465 [2024-06-10 10:39:09.552824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.465 [2024-06-10 10:39:09.552834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.465 [2024-06-10 10:39:09.552841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.465 [2024-06-10 10:39:09.552851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.465 [2024-06-10 10:39:09.552858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.465 [2024-06-10 10:39:09.552867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.465 [2024-06-10 10:39:09.552875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.465 [2024-06-10 10:39:09.552884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.465 [2024-06-10 10:39:09.552892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.465 [2024-06-10 10:39:09.552901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.465 [2024-06-10 10:39:09.552909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.465 [2024-06-10 10:39:09.552919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.465 [2024-06-10 10:39:09.552926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.465 [2024-06-10 10:39:09.552935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.465 [2024-06-10 10:39:09.552942] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.465 [2024-06-10 10:39:09.552952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.465 [2024-06-10 10:39:09.552961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.465 [2024-06-10 10:39:09.552971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.465 [2024-06-10 10:39:09.552978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.465 [2024-06-10 10:39:09.552988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.465 [2024-06-10 10:39:09.552995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.465 [2024-06-10 10:39:09.553004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.465 [2024-06-10 10:39:09.553012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.465 [2024-06-10 10:39:09.553022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.465 [2024-06-10 10:39:09.553029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.465 [2024-06-10 10:39:09.553038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.465 [2024-06-10 10:39:09.553046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.465 [2024-06-10 10:39:09.553056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.465 [2024-06-10 10:39:09.553063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.465 [2024-06-10 10:39:09.553072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.465 [2024-06-10 10:39:09.553080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.465 [2024-06-10 10:39:09.553089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.465 [2024-06-10 10:39:09.553097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.465 [2024-06-10 10:39:09.553106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.465 [2024-06-10 10:39:09.553114] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.465 [2024-06-10 10:39:09.553124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.465 [2024-06-10 10:39:09.553131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.465 [2024-06-10 10:39:09.553141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.465 [2024-06-10 10:39:09.553148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.465 [2024-06-10 10:39:09.553158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.465 [2024-06-10 10:39:09.553165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.465 [2024-06-10 10:39:09.553178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.465 [2024-06-10 10:39:09.553186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.465 [2024-06-10 10:39:09.553195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.465 [2024-06-10 10:39:09.553202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.465 [2024-06-10 10:39:09.553212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.465 [2024-06-10 10:39:09.553219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.465 [2024-06-10 10:39:09.553229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.465 [2024-06-10 10:39:09.553236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.465 [2024-06-10 10:39:09.553249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.465 [2024-06-10 10:39:09.553256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.465 [2024-06-10 10:39:09.553265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.465 [2024-06-10 10:39:09.553272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.465 [2024-06-10 10:39:09.553281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.465 [2024-06-10 10:39:09.553289] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.465 [2024-06-10 10:39:09.553298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.465 [2024-06-10 10:39:09.553305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.465 [2024-06-10 10:39:09.553314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.465 [2024-06-10 10:39:09.553322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.465 [2024-06-10 10:39:09.553331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.465 [2024-06-10 10:39:09.553339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.465 [2024-06-10 10:39:09.553348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.465 [2024-06-10 10:39:09.553356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.465 [2024-06-10 10:39:09.553365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:45.465 [2024-06-10 10:39:09.553373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.465 [2024-06-10 10:39:09.553382] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1859280 is same with the state(5) to be set 00:13:45.465 task offset: 65536 on job bdev=Nvme0n1 fails 00:13:45.465 00:13:45.465 Latency(us) 00:13:45.465 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:45.465 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:45.465 Job: Nvme0n1 ended in about 0.44 seconds with error 00:13:45.465 Verification LBA range: start 0x0 length 0x400 00:13:45.465 Nvme0n1 : 0.44 1175.64 73.48 146.95 0.00 47056.45 12014.93 39321.60 00:13:45.465 =================================================================================================================== 00:13:45.465 Total : 1175.64 73.48 146.95 0.00 47056.45 12014.93 39321.60 00:13:45.465 [2024-06-10 10:39:09.556590] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:45.465 [2024-06-10 10:39:09.556612] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:45.465 [2024-06-10 10:39:09.608851] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
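The abort storm above is the point of the test: host_management.sh removes the host from the subsystem while bdevperf still has queue-depth-64 I/O in flight, so every outstanding command on qid 1 (cid 0-63) completes as ABORTED - SQ DELETION, and the controller reset completes once the host is re-added. A sketch of the same remove/re-add pair issued directly with scripts/rpc.py (the test uses its rpc_cmd wrapper; NQNs are taken from this run, and the target's default /var/tmp/spdk.sock RPC socket is assumed):

# Sketch: the host remove/re-add pair exercised by host_management.sh@84/@85.
./scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0  # in-flight I/O aborts (SQ DELETION)
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0     # host may reconnect; controller reset then succeeds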
00:13:46.462 10:39:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 759734 00:13:46.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (759734) - No such process 00:13:46.462 10:39:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:13:46.462 10:39:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:46.462 10:39:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:46.462 10:39:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:46.462 10:39:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:46.462 10:39:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:46.462 10:39:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:46.462 10:39:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:46.462 { 00:13:46.462 "params": { 00:13:46.462 "name": "Nvme$subsystem", 00:13:46.462 "trtype": "$TEST_TRANSPORT", 00:13:46.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:46.462 "adrfam": "ipv4", 00:13:46.462 "trsvcid": "$NVMF_PORT", 00:13:46.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:46.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:46.462 "hdgst": ${hdgst:-false}, 00:13:46.462 "ddgst": ${ddgst:-false} 00:13:46.462 }, 00:13:46.462 "method": "bdev_nvme_attach_controller" 00:13:46.462 } 00:13:46.462 EOF 00:13:46.462 )") 00:13:46.462 10:39:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:46.462 10:39:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:46.462 10:39:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:46.462 10:39:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:46.462 "params": { 00:13:46.462 "name": "Nvme0", 00:13:46.462 "trtype": "tcp", 00:13:46.462 "traddr": "10.0.0.2", 00:13:46.462 "adrfam": "ipv4", 00:13:46.462 "trsvcid": "4420", 00:13:46.462 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:46.462 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:46.462 "hdgst": false, 00:13:46.462 "ddgst": false 00:13:46.462 }, 00:13:46.462 "method": "bdev_nvme_attach_controller" 00:13:46.462 }' 00:13:46.462 [2024-06-10 10:39:10.609659] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:13:46.462 [2024-06-10 10:39:10.609719] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid760544 ] 00:13:46.462 EAL: No free 2048 kB hugepages reported on node 1 00:13:46.462 [2024-06-10 10:39:10.669684] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.462 [2024-06-10 10:39:10.734807] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.723 Running I/O for 1 seconds... 
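As a quick sanity check on the summary that follows (not part of the test itself): with 64 KiB I/O, MiB/s is simply IOPS/16, so the 1363.05 IOPS reported below corresponds to about 85.19 MiB/s, and the earlier aborted run's 1175.64 IOPS to 73.48 MiB/s, matching both tables.

# Arithmetic check only, not part of the harness: MiB/s = IOPS * io_size / 2^20.
awk 'BEGIN { printf "%.2f\n", 1363.05 * 65536 / 1048576 }'   # prints 85.19, as in the table below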
00:13:48.106 00:13:48.106 Latency(us) 00:13:48.106 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:48.106 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:48.106 Verification LBA range: start 0x0 length 0x400 00:13:48.106 Nvme0n1 : 1.03 1363.05 85.19 0.00 0.00 46193.67 12342.61 36700.16 00:13:48.106 =================================================================================================================== 00:13:48.106 Total : 1363.05 85.19 0.00 0.00 46193.67 12342.61 36700.16 00:13:48.106 10:39:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:13:48.106 10:39:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:48.106 10:39:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:13:48.106 10:39:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:48.106 10:39:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:13:48.106 10:39:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:48.106 10:39:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:13:48.106 10:39:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:48.106 10:39:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:13:48.106 10:39:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:48.106 10:39:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:48.106 rmmod nvme_tcp 00:13:48.106 rmmod nvme_fabrics 00:13:48.106 rmmod nvme_keyring 00:13:48.106 10:39:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:48.106 10:39:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:13:48.106 10:39:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:13:48.106 10:39:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 759489 ']' 00:13:48.106 10:39:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 759489 00:13:48.106 10:39:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@949 -- # '[' -z 759489 ']' 00:13:48.106 10:39:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # kill -0 759489 00:13:48.106 10:39:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # uname 00:13:48.106 10:39:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:48.106 10:39:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 759489 00:13:48.106 10:39:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:13:48.106 10:39:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:13:48.106 10:39:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # echo 'killing process with pid 759489' 00:13:48.106 killing process with pid 759489 00:13:48.106 10:39:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@968 -- # kill 759489 00:13:48.106 [2024-06-10 10:39:12.228612] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is 
deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:48.106 10:39:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@973 -- # wait 759489 00:13:48.106 [2024-06-10 10:39:12.334483] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:48.106 10:39:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:48.106 10:39:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:48.106 10:39:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:48.106 10:39:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:48.106 10:39:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:48.107 10:39:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.107 10:39:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:48.107 10:39:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.654 10:39:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:50.654 10:39:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:50.654 00:13:50.654 real 0m14.334s 00:13:50.654 user 0m22.525s 00:13:50.654 sys 0m6.461s 00:13:50.654 10:39:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:50.654 10:39:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:50.654 ************************************ 00:13:50.654 END TEST nvmf_host_management 00:13:50.654 ************************************ 00:13:50.654 10:39:14 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:50.654 10:39:14 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:50.654 10:39:14 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:50.654 10:39:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:50.654 ************************************ 00:13:50.654 START TEST nvmf_lvol 00:13:50.654 ************************************ 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:50.654 * Looking for test storage... 
00:13:50.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.654 10:39:14 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:50.654 10:39:14 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:50.655 10:39:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:13:50.655 10:39:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:57.247 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:57.247 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:13:57.247 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:57.247 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:57.247 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:57.247 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:57.247 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:57.247 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:13:57.247 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:57.247 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:13:57.247 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:13:57.247 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:13:57.247 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:13:57.247 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:13:57.247 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:13:57.247 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:57.247 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:57.247 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:57.247 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:57.247 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:57.247 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:57.247 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:57.508 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:57.508 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:57.508 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:57.508 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:57.508 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:57.508 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:57.508 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:57.508 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:57.508 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:57.508 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:57.508 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:57.508 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:57.508 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:57.508 10:39:21 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:57.508 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:57.508 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:57.508 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:57.508 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:57.508 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:57.508 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:57.508 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:57.508 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:57.509 Found net devices under 0000:31:00.0: cvl_0_0 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:57.509 Found net devices under 0000:31:00.1: cvl_0_1 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:57.509 
10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:57.509 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:57.770 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:57.770 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:57.770 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:57.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:57.770 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:13:57.770 00:13:57.770 --- 10.0.0.2 ping statistics --- 00:13:57.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.770 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:13:57.770 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:57.770 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:57.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:13:57.770 00:13:57.770 --- 10.0.0.1 ping statistics --- 00:13:57.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.770 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:13:57.770 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:57.770 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:13:57.770 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:57.770 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:57.770 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:57.770 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:57.771 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:57.771 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:57.771 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:57.771 10:39:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:57.771 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:57.771 10:39:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:57.771 10:39:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:57.771 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=765073 00:13:57.771 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 765073 00:13:57.771 10:39:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:57.771 10:39:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@830 -- # '[' -z 765073 ']' 00:13:57.771 10:39:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.771 10:39:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:57.771 10:39:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.771 10:39:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:57.771 10:39:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:57.771 [2024-06-10 10:39:21.941770] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:13:57.771 [2024-06-10 10:39:21.941855] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:57.771 EAL: No free 2048 kB hugepages reported on node 1 00:13:57.771 [2024-06-10 10:39:22.013172] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:58.030 [2024-06-10 10:39:22.078277] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:58.030 [2024-06-10 10:39:22.078316] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:58.030 [2024-06-10 10:39:22.078323] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:58.030 [2024-06-10 10:39:22.078330] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:58.030 [2024-06-10 10:39:22.078335] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:58.030 [2024-06-10 10:39:22.078471] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:13:58.030 [2024-06-10 10:39:22.078662] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:13:58.030 [2024-06-10 10:39:22.078665] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.600 10:39:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:58.600 10:39:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@863 -- # return 0 00:13:58.600 10:39:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:58.600 10:39:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:58.600 10:39:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:58.600 10:39:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:58.600 10:39:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:58.861 [2024-06-10 10:39:22.890612] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:58.861 10:39:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:58.861 10:39:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:58.861 10:39:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:59.126 10:39:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:59.126 10:39:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:59.386 10:39:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:59.386 10:39:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=aaa8cc02-c1ee-4c4a-ba28-51e50d6ea2a2 00:13:59.386 10:39:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u aaa8cc02-c1ee-4c4a-ba28-51e50d6ea2a2 lvol 20 00:13:59.647 10:39:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a901525a-a569-4c41-b54a-2223800a01cd 00:13:59.647 10:39:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:59.908 10:39:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a901525a-a569-4c41-b54a-2223800a01cd 00:13:59.908 10:39:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:14:00.170 [2024-06-10 10:39:24.220469] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:00.170 [2024-06-10 10:39:24.220735] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:00.170 10:39:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:00.170 10:39:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=765617 00:14:00.170 10:39:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:00.171 10:39:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:00.171 EAL: No free 2048 kB hugepages reported on node 1 00:14:01.113 10:39:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot a901525a-a569-4c41-b54a-2223800a01cd MY_SNAPSHOT 00:14:01.375 10:39:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=736ba0a9-06fd-40bb-a70e-746d26c81d85 00:14:01.375 10:39:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize a901525a-a569-4c41-b54a-2223800a01cd 30 00:14:01.636 10:39:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 736ba0a9-06fd-40bb-a70e-746d26c81d85 MY_CLONE 00:14:01.898 10:39:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=d0389b94-0d99-4c93-a212-e33a2cf67812 00:14:01.898 10:39:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate d0389b94-0d99-4c93-a212-e33a2cf67812 00:14:02.159 10:39:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 765617 00:14:12.163 Initializing NVMe Controllers 00:14:12.163 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:12.163 Controller IO queue size 128, less than required. 00:14:12.163 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:12.163 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:12.163 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:12.163 Initialization complete. Launching workers. 
00:14:12.163 ======================================================== 00:14:12.163 Latency(us) 00:14:12.163 Device Information : IOPS MiB/s Average min max 00:14:12.163 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12314.00 48.10 10397.89 1452.41 48392.70 00:14:12.163 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17922.40 70.01 7141.48 1151.87 53359.90 00:14:12.163 ======================================================== 00:14:12.163 Total : 30236.40 118.11 8467.68 1151.87 53359.90 00:14:12.163 00:14:12.163 10:39:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:12.163 10:39:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a901525a-a569-4c41-b54a-2223800a01cd 00:14:12.163 10:39:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u aaa8cc02-c1ee-4c4a-ba28-51e50d6ea2a2 00:14:12.163 10:39:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:12.163 10:39:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:12.163 10:39:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:12.163 10:39:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:12.164 10:39:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:14:12.164 10:39:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:12.164 10:39:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:14:12.164 10:39:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:12.164 10:39:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:12.164 rmmod nvme_tcp 00:14:12.164 rmmod nvme_fabrics 00:14:12.164 rmmod nvme_keyring 00:14:12.164 10:39:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:12.164 10:39:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:14:12.164 10:39:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:14:12.164 10:39:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 765073 ']' 00:14:12.164 10:39:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 765073 00:14:12.164 10:39:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@949 -- # '[' -z 765073 ']' 00:14:12.164 10:39:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # kill -0 765073 00:14:12.164 10:39:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # uname 00:14:12.164 10:39:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:12.164 10:39:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 765073 00:14:12.164 10:39:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:12.164 10:39:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:12.164 10:39:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # echo 'killing process with pid 765073' 00:14:12.164 killing process with pid 765073 00:14:12.164 10:39:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@968 -- # kill 765073 00:14:12.164 [2024-06-10 10:39:35.359122] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for 
removal in v24.09 hit 1 times 00:14:12.164 10:39:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@973 -- # wait 765073 00:14:12.164 10:39:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:12.164 10:39:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:12.164 10:39:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:12.164 10:39:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:12.164 10:39:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:12.164 10:39:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.164 10:39:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:12.164 10:39:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.565 10:39:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:13.565 00:14:13.565 real 0m23.084s 00:14:13.565 user 1m3.135s 00:14:13.565 sys 0m7.738s 00:14:13.565 10:39:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:13.565 10:39:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:13.565 ************************************ 00:14:13.565 END TEST nvmf_lvol 00:14:13.565 ************************************ 00:14:13.565 10:39:37 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:13.565 10:39:37 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:13.565 10:39:37 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:13.565 10:39:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:13.565 ************************************ 00:14:13.565 START TEST nvmf_lvs_grow 00:14:13.565 ************************************ 00:14:13.565 10:39:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:13.565 * Looking for test storage... 
00:14:13.565 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:13.565 10:39:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:13.565 10:39:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:14:13.565 10:39:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:13.565 10:39:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:13.565 10:39:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:13.565 10:39:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:13.565 10:39:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:13.565 10:39:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:13.565 10:39:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:13.565 10:39:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:13.565 10:39:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:13.565 10:39:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:13.565 10:39:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:13.565 10:39:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:13.565 10:39:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:13.565 10:39:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:13.565 10:39:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:13.565 10:39:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:13.566 10:39:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:13.566 10:39:37 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:13.566 10:39:37 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:13.566 10:39:37 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:13.566 10:39:37 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.566 10:39:37 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.566 10:39:37 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.566 10:39:37 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:14:13.566 10:39:37 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.566 10:39:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:14:13.566 10:39:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:13.566 10:39:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:13.566 10:39:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:13.566 10:39:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:13.566 10:39:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:13.566 10:39:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:13.566 10:39:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:13.566 10:39:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:13.566 10:39:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:13.566 10:39:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:13.566 10:39:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:14:13.566 10:39:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:13.566 10:39:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:13.566 10:39:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:13.566 10:39:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:13.566 10:39:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:13.566 10:39:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.566 10:39:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:13.566 10:39:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.566 10:39:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:13.566 10:39:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:13.566 10:39:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:14:13.566 10:39:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:21.715 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:21.715 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:21.715 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.716 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:21.716 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:21.716 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:21.716 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:21.716 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.716 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:21.716 Found net devices under 0000:31:00.0: cvl_0_0 00:14:21.716 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.716 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:21.716 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.716 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:21.716 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:21.716 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:21.716 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:14:21.716 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.716 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:21.716 Found net devices under 0000:31:00.1: cvl_0_1 00:14:21.716 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.716 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:21.716 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:14:21.716 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:21.716 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:21.716 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:21.716 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:21.716 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:21.716 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:21.716 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:21.716 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:21.716 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:21.716 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:21.716 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:21.716 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:21.716 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:21.716 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:21.716 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:21.716 10:39:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:21.716 10:39:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:21.716 10:39:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:21.716 10:39:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:21.716 10:39:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:21.716 10:39:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:21.716 10:39:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:21.716 10:39:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:21.716 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:21.716 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms 00:14:21.716 00:14:21.716 --- 10.0.0.2 ping statistics --- 00:14:21.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.716 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:14:21.716 10:39:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:21.716 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:21.716 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.369 ms 00:14:21.716 00:14:21.716 --- 10.0.0.1 ping statistics --- 00:14:21.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.716 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:14:21.716 10:39:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:21.716 10:39:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:14:21.716 10:39:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:21.716 10:39:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:21.716 10:39:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:21.716 10:39:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:21.716 10:39:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:21.716 10:39:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:21.716 10:39:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:21.716 10:39:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:14:21.716 10:39:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:21.716 10:39:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:21.716 10:39:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:21.716 10:39:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=772039 00:14:21.716 10:39:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 772039 00:14:21.716 10:39:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:21.716 10:39:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@830 -- # '[' -z 772039 ']' 00:14:21.716 10:39:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.716 10:39:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:21.716 10:39:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.716 10:39:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:21.716 10:39:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:21.716 [2024-06-10 10:39:45.273394] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:14:21.716 [2024-06-10 10:39:45.273458] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.716 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.716 [2024-06-10 10:39:45.346118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.716 [2024-06-10 10:39:45.419826] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.716 [2024-06-10 10:39:45.419865] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:21.716 [2024-06-10 10:39:45.419873] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:21.716 [2024-06-10 10:39:45.419879] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:21.716 [2024-06-10 10:39:45.419885] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:21.716 [2024-06-10 10:39:45.419914] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.005 10:39:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:22.005 10:39:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@863 -- # return 0 00:14:22.005 10:39:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:22.005 10:39:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:22.005 10:39:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:22.005 10:39:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:22.005 10:39:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:22.005 [2024-06-10 10:39:46.231633] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:22.005 10:39:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:14:22.005 10:39:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:14:22.005 10:39:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:22.005 10:39:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:22.267 ************************************ 00:14:22.267 START TEST lvs_grow_clean 00:14:22.267 ************************************ 00:14:22.267 10:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # lvs_grow 00:14:22.267 10:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:22.267 10:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:22.267 10:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:22.267 10:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:22.267 10:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:22.267 10:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:22.267 10:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:22.267 10:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:22.267 10:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:22.267 10:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:14:22.267 10:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:22.529 10:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=85edf121-80d9-4d72-b5f0-7e63d1226e70 00:14:22.529 10:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85edf121-80d9-4d72-b5f0-7e63d1226e70 00:14:22.529 10:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:22.529 10:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:22.529 10:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:22.529 10:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 85edf121-80d9-4d72-b5f0-7e63d1226e70 lvol 150 00:14:22.790 10:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=46f5ffd7-97ae-48f0-b810-5b72a663c149 00:14:22.790 10:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:22.790 10:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:23.052 [2024-06-10 10:39:47.106672] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:23.052 [2024-06-10 10:39:47.106724] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:23.052 true 00:14:23.052 10:39:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85edf121-80d9-4d72-b5f0-7e63d1226e70 00:14:23.052 10:39:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:23.052 10:39:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:23.052 10:39:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:23.312 10:39:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 46f5ffd7-97ae-48f0-b810-5b72a663c149 00:14:23.312 10:39:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:23.574 [2024-06-10 10:39:47.680213] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:23.574 [2024-06-10 
10:39:47.680442] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.574 10:39:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:23.574 10:39:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=772570 00:14:23.574 10:39:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:23.574 10:39:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:23.574 10:39:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 772570 /var/tmp/bdevperf.sock 00:14:23.574 10:39:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@830 -- # '[' -z 772570 ']' 00:14:23.574 10:39:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:23.574 10:39:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:23.574 10:39:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:23.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:23.574 10:39:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:23.574 10:39:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:23.843 [2024-06-10 10:39:47.881477] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
00:14:23.843 [2024-06-10 10:39:47.881525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid772570 ] 00:14:23.843 EAL: No free 2048 kB hugepages reported on node 1 00:14:23.843 [2024-06-10 10:39:47.956923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.843 [2024-06-10 10:39:48.021259] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.419 10:39:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:24.419 10:39:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@863 -- # return 0 00:14:24.419 10:39:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:24.681 Nvme0n1 00:14:24.681 10:39:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:24.943 [ 00:14:24.943 { 00:14:24.943 "name": "Nvme0n1", 00:14:24.943 "aliases": [ 00:14:24.943 "46f5ffd7-97ae-48f0-b810-5b72a663c149" 00:14:24.943 ], 00:14:24.943 "product_name": "NVMe disk", 00:14:24.943 "block_size": 4096, 00:14:24.943 "num_blocks": 38912, 00:14:24.943 "uuid": "46f5ffd7-97ae-48f0-b810-5b72a663c149", 00:14:24.943 "assigned_rate_limits": { 00:14:24.943 "rw_ios_per_sec": 0, 00:14:24.943 "rw_mbytes_per_sec": 0, 00:14:24.943 "r_mbytes_per_sec": 0, 00:14:24.943 "w_mbytes_per_sec": 0 00:14:24.943 }, 00:14:24.943 "claimed": false, 00:14:24.943 "zoned": false, 00:14:24.943 "supported_io_types": { 00:14:24.943 "read": true, 00:14:24.943 "write": true, 00:14:24.943 "unmap": true, 00:14:24.943 "write_zeroes": true, 00:14:24.943 "flush": true, 00:14:24.943 "reset": true, 00:14:24.943 "compare": true, 00:14:24.943 "compare_and_write": true, 00:14:24.943 "abort": true, 00:14:24.943 "nvme_admin": true, 00:14:24.943 "nvme_io": true 00:14:24.943 }, 00:14:24.943 "memory_domains": [ 00:14:24.943 { 00:14:24.943 "dma_device_id": "system", 00:14:24.943 "dma_device_type": 1 00:14:24.943 } 00:14:24.943 ], 00:14:24.943 "driver_specific": { 00:14:24.943 "nvme": [ 00:14:24.943 { 00:14:24.943 "trid": { 00:14:24.943 "trtype": "TCP", 00:14:24.943 "adrfam": "IPv4", 00:14:24.943 "traddr": "10.0.0.2", 00:14:24.943 "trsvcid": "4420", 00:14:24.943 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:24.943 }, 00:14:24.943 "ctrlr_data": { 00:14:24.943 "cntlid": 1, 00:14:24.943 "vendor_id": "0x8086", 00:14:24.943 "model_number": "SPDK bdev Controller", 00:14:24.943 "serial_number": "SPDK0", 00:14:24.943 "firmware_revision": "24.09", 00:14:24.943 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:24.943 "oacs": { 00:14:24.943 "security": 0, 00:14:24.943 "format": 0, 00:14:24.943 "firmware": 0, 00:14:24.943 "ns_manage": 0 00:14:24.943 }, 00:14:24.943 "multi_ctrlr": true, 00:14:24.943 "ana_reporting": false 00:14:24.943 }, 00:14:24.943 "vs": { 00:14:24.943 "nvme_version": "1.3" 00:14:24.943 }, 00:14:24.943 "ns_data": { 00:14:24.943 "id": 1, 00:14:24.943 "can_share": true 00:14:24.943 } 00:14:24.943 } 00:14:24.943 ], 00:14:24.943 "mp_policy": "active_passive" 00:14:24.943 } 00:14:24.943 } 00:14:24.943 ] 00:14:24.943 10:39:49 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=772904 00:14:24.943 10:39:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:24.943 10:39:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:24.943 Running I/O for 10 seconds... 00:14:25.888 Latency(us) 00:14:25.888 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.888 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:25.888 Nvme0n1 : 1.00 18179.00 71.01 0.00 0.00 0.00 0.00 0.00 00:14:25.888 =================================================================================================================== 00:14:25.888 Total : 18179.00 71.01 0.00 0.00 0.00 0.00 0.00 00:14:25.888 00:14:26.831 10:39:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 85edf121-80d9-4d72-b5f0-7e63d1226e70 00:14:27.092 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:27.092 Nvme0n1 : 2.00 18277.00 71.39 0.00 0.00 0.00 0.00 0.00 00:14:27.092 =================================================================================================================== 00:14:27.092 Total : 18277.00 71.39 0.00 0.00 0.00 0.00 0.00 00:14:27.092 00:14:27.092 true 00:14:27.092 10:39:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85edf121-80d9-4d72-b5f0-7e63d1226e70 00:14:27.092 10:39:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:27.353 10:39:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:27.353 10:39:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:27.353 10:39:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 772904 00:14:27.923 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:27.923 Nvme0n1 : 3.00 18307.33 71.51 0.00 0.00 0.00 0.00 0.00 00:14:27.923 =================================================================================================================== 00:14:27.923 Total : 18307.33 71.51 0.00 0.00 0.00 0.00 0.00 00:14:27.923 00:14:29.308 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:29.308 Nvme0n1 : 4.00 18322.25 71.57 0.00 0.00 0.00 0.00 0.00 00:14:29.308 =================================================================================================================== 00:14:29.308 Total : 18322.25 71.57 0.00 0.00 0.00 0.00 0.00 00:14:29.308 00:14:29.891 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:29.891 Nvme0n1 : 5.00 18344.20 71.66 0.00 0.00 0.00 0.00 0.00 00:14:29.891 =================================================================================================================== 00:14:29.891 Total : 18344.20 71.66 0.00 0.00 0.00 0.00 0.00 00:14:29.891 00:14:31.277 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:31.277 Nvme0n1 : 6.00 18358.83 71.71 0.00 0.00 0.00 0.00 0.00 00:14:31.277 
=================================================================================================================== 00:14:31.277 Total : 18358.83 71.71 0.00 0.00 0.00 0.00 0.00 00:14:31.277 00:14:32.218 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:32.218 Nvme0n1 : 7.00 18369.29 71.76 0.00 0.00 0.00 0.00 0.00 00:14:32.218 =================================================================================================================== 00:14:32.218 Total : 18369.29 71.76 0.00 0.00 0.00 0.00 0.00 00:14:32.218 00:14:33.160 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:33.160 Nvme0n1 : 8.00 18377.00 71.79 0.00 0.00 0.00 0.00 0.00 00:14:33.160 =================================================================================================================== 00:14:33.161 Total : 18377.00 71.79 0.00 0.00 0.00 0.00 0.00 00:14:33.161 00:14:34.195 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:34.195 Nvme0n1 : 9.00 18385.00 71.82 0.00 0.00 0.00 0.00 0.00 00:14:34.195 =================================================================================================================== 00:14:34.195 Total : 18385.00 71.82 0.00 0.00 0.00 0.00 0.00 00:14:34.195 00:14:35.137 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:35.137 Nvme0n1 : 10.00 18394.40 71.85 0.00 0.00 0.00 0.00 0.00 00:14:35.137 =================================================================================================================== 00:14:35.137 Total : 18394.40 71.85 0.00 0.00 0.00 0.00 0.00 00:14:35.137 00:14:35.137 00:14:35.137 Latency(us) 00:14:35.137 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.137 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:35.137 Nvme0n1 : 10.01 18394.91 71.86 0.00 0.00 6954.71 2157.23 12342.61 00:14:35.137 =================================================================================================================== 00:14:35.137 Total : 18394.91 71.86 0.00 0.00 6954.71 2157.23 12342.61 00:14:35.137 0 00:14:35.137 10:39:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 772570 00:14:35.137 10:39:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@949 -- # '[' -z 772570 ']' 00:14:35.137 10:39:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # kill -0 772570 00:14:35.137 10:39:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # uname 00:14:35.137 10:39:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:35.137 10:39:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 772570 00:14:35.137 10:39:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:14:35.137 10:39:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:14:35.137 10:39:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 772570' 00:14:35.137 killing process with pid 772570 00:14:35.137 10:39:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # kill 772570 00:14:35.137 Received shutdown signal, test time was about 10.000000 seconds 00:14:35.137 00:14:35.137 Latency(us) 00:14:35.137 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:14:35.137 =================================================================================================================== 00:14:35.137 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:35.137 10:39:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # wait 772570 00:14:35.137 10:39:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:35.398 10:39:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:35.398 10:39:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85edf121-80d9-4d72-b5f0-7e63d1226e70 00:14:35.398 10:39:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:35.659 10:39:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:35.659 10:39:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:14:35.659 10:39:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:35.921 [2024-06-10 10:39:59.963378] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:35.921 10:39:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85edf121-80d9-4d72-b5f0-7e63d1226e70 00:14:35.921 10:39:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # local es=0 00:14:35.921 10:39:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85edf121-80d9-4d72-b5f0-7e63d1226e70 00:14:35.921 10:39:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:35.921 10:39:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:35.921 10:39:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:35.921 10:39:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:35.921 10:39:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:35.921 10:39:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:35.921 10:39:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:35.921 10:39:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:35.921 10:39:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85edf121-80d9-4d72-b5f0-7e63d1226e70 00:14:35.921 request: 00:14:35.921 { 00:14:35.921 "uuid": "85edf121-80d9-4d72-b5f0-7e63d1226e70", 00:14:35.921 "method": "bdev_lvol_get_lvstores", 00:14:35.921 "req_id": 1 00:14:35.921 } 00:14:35.921 Got JSON-RPC error response 00:14:35.921 response: 00:14:35.921 { 00:14:35.921 "code": -19, 00:14:35.921 "message": "No such device" 00:14:35.921 } 00:14:35.921 10:40:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # es=1 00:14:35.921 10:40:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:35.921 10:40:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:35.921 10:40:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:35.921 10:40:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:36.182 aio_bdev 00:14:36.182 10:40:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 46f5ffd7-97ae-48f0-b810-5b72a663c149 00:14:36.182 10:40:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_name=46f5ffd7-97ae-48f0-b810-5b72a663c149 00:14:36.182 10:40:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:14:36.182 10:40:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local i 00:14:36.182 10:40:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:14:36.182 10:40:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:14:36.182 10:40:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:36.182 10:40:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 46f5ffd7-97ae-48f0-b810-5b72a663c149 -t 2000 00:14:36.443 [ 00:14:36.443 { 00:14:36.443 "name": "46f5ffd7-97ae-48f0-b810-5b72a663c149", 00:14:36.443 "aliases": [ 00:14:36.443 "lvs/lvol" 00:14:36.443 ], 00:14:36.443 "product_name": "Logical Volume", 00:14:36.443 "block_size": 4096, 00:14:36.443 "num_blocks": 38912, 00:14:36.443 "uuid": "46f5ffd7-97ae-48f0-b810-5b72a663c149", 00:14:36.443 "assigned_rate_limits": { 00:14:36.443 "rw_ios_per_sec": 0, 00:14:36.443 "rw_mbytes_per_sec": 0, 00:14:36.443 "r_mbytes_per_sec": 0, 00:14:36.443 "w_mbytes_per_sec": 0 00:14:36.443 }, 00:14:36.443 "claimed": false, 00:14:36.443 "zoned": false, 00:14:36.443 "supported_io_types": { 00:14:36.443 "read": true, 00:14:36.443 "write": true, 00:14:36.443 "unmap": true, 00:14:36.443 "write_zeroes": true, 00:14:36.443 "flush": false, 00:14:36.443 "reset": true, 00:14:36.443 "compare": false, 00:14:36.443 "compare_and_write": false, 00:14:36.443 "abort": false, 00:14:36.443 "nvme_admin": false, 00:14:36.443 "nvme_io": false 00:14:36.443 }, 00:14:36.443 "driver_specific": { 00:14:36.443 "lvol": { 00:14:36.443 "lvol_store_uuid": "85edf121-80d9-4d72-b5f0-7e63d1226e70", 00:14:36.443 "base_bdev": "aio_bdev", 
00:14:36.443 "thin_provision": false, 00:14:36.443 "num_allocated_clusters": 38, 00:14:36.443 "snapshot": false, 00:14:36.443 "clone": false, 00:14:36.443 "esnap_clone": false 00:14:36.443 } 00:14:36.443 } 00:14:36.443 } 00:14:36.443 ] 00:14:36.443 10:40:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # return 0 00:14:36.443 10:40:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85edf121-80d9-4d72-b5f0-7e63d1226e70 00:14:36.443 10:40:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:36.704 10:40:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:36.704 10:40:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85edf121-80d9-4d72-b5f0-7e63d1226e70 00:14:36.704 10:40:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:36.704 10:40:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:36.704 10:40:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 46f5ffd7-97ae-48f0-b810-5b72a663c149 00:14:36.964 10:40:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 85edf121-80d9-4d72-b5f0-7e63d1226e70 00:14:36.964 10:40:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:37.224 10:40:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:37.224 00:14:37.224 real 0m15.120s 00:14:37.224 user 0m14.854s 00:14:37.224 sys 0m1.265s 00:14:37.224 10:40:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:37.224 10:40:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:37.224 ************************************ 00:14:37.224 END TEST lvs_grow_clean 00:14:37.224 ************************************ 00:14:37.224 10:40:01 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:37.224 10:40:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:37.224 10:40:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:37.224 10:40:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:37.224 ************************************ 00:14:37.224 START TEST lvs_grow_dirty 00:14:37.224 ************************************ 00:14:37.224 10:40:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # lvs_grow dirty 00:14:37.224 10:40:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:37.224 10:40:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:37.224 10:40:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:14:37.224 10:40:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:37.224 10:40:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:37.224 10:40:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:37.224 10:40:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:37.224 10:40:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:37.224 10:40:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:37.485 10:40:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:37.485 10:40:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:37.745 10:40:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=a97b7a04-ffd3-46ec-9470-c531b291c479 00:14:37.745 10:40:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:37.745 10:40:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a97b7a04-ffd3-46ec-9470-c531b291c479 00:14:37.745 10:40:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:37.745 10:40:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:37.745 10:40:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a97b7a04-ffd3-46ec-9470-c531b291c479 lvol 150 00:14:38.006 10:40:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=4331626b-86a8-4938-a3a1-c39d387aab58 00:14:38.006 10:40:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:38.006 10:40:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:38.266 [2024-06-10 10:40:02.299744] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:38.266 [2024-06-10 10:40:02.299792] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:38.266 true 00:14:38.266 10:40:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a97b7a04-ffd3-46ec-9470-c531b291c479 00:14:38.266 10:40:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:14:38.266 10:40:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:38.266 10:40:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:38.527 10:40:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4331626b-86a8-4938-a3a1-c39d387aab58 00:14:38.527 10:40:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:38.787 [2024-06-10 10:40:02.913604] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:38.787 10:40:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:39.047 10:40:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=775651 00:14:39.047 10:40:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:39.047 10:40:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:39.047 10:40:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 775651 /var/tmp/bdevperf.sock 00:14:39.047 10:40:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 775651 ']' 00:14:39.047 10:40:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:39.047 10:40:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:39.047 10:40:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:39.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:39.047 10:40:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:39.047 10:40:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:39.047 [2024-06-10 10:40:03.128275] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
00:14:39.047 [2024-06-10 10:40:03.128328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid775651 ] 00:14:39.047 EAL: No free 2048 kB hugepages reported on node 1 00:14:39.047 [2024-06-10 10:40:03.204220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.047 [2024-06-10 10:40:03.257833] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:14:39.618 10:40:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:39.618 10:40:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:14:39.618 10:40:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:39.879 Nvme0n1 00:14:39.879 10:40:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:40.142 [ 00:14:40.142 { 00:14:40.142 "name": "Nvme0n1", 00:14:40.142 "aliases": [ 00:14:40.142 "4331626b-86a8-4938-a3a1-c39d387aab58" 00:14:40.142 ], 00:14:40.142 "product_name": "NVMe disk", 00:14:40.142 "block_size": 4096, 00:14:40.142 "num_blocks": 38912, 00:14:40.142 "uuid": "4331626b-86a8-4938-a3a1-c39d387aab58", 00:14:40.142 "assigned_rate_limits": { 00:14:40.142 "rw_ios_per_sec": 0, 00:14:40.142 "rw_mbytes_per_sec": 0, 00:14:40.142 "r_mbytes_per_sec": 0, 00:14:40.142 "w_mbytes_per_sec": 0 00:14:40.142 }, 00:14:40.142 "claimed": false, 00:14:40.142 "zoned": false, 00:14:40.142 "supported_io_types": { 00:14:40.142 "read": true, 00:14:40.142 "write": true, 00:14:40.142 "unmap": true, 00:14:40.142 "write_zeroes": true, 00:14:40.142 "flush": true, 00:14:40.142 "reset": true, 00:14:40.142 "compare": true, 00:14:40.142 "compare_and_write": true, 00:14:40.142 "abort": true, 00:14:40.142 "nvme_admin": true, 00:14:40.142 "nvme_io": true 00:14:40.142 }, 00:14:40.142 "memory_domains": [ 00:14:40.142 { 00:14:40.142 "dma_device_id": "system", 00:14:40.142 "dma_device_type": 1 00:14:40.142 } 00:14:40.142 ], 00:14:40.142 "driver_specific": { 00:14:40.142 "nvme": [ 00:14:40.142 { 00:14:40.142 "trid": { 00:14:40.142 "trtype": "TCP", 00:14:40.142 "adrfam": "IPv4", 00:14:40.142 "traddr": "10.0.0.2", 00:14:40.142 "trsvcid": "4420", 00:14:40.142 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:40.142 }, 00:14:40.142 "ctrlr_data": { 00:14:40.142 "cntlid": 1, 00:14:40.142 "vendor_id": "0x8086", 00:14:40.142 "model_number": "SPDK bdev Controller", 00:14:40.142 "serial_number": "SPDK0", 00:14:40.142 "firmware_revision": "24.09", 00:14:40.142 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:40.142 "oacs": { 00:14:40.142 "security": 0, 00:14:40.142 "format": 0, 00:14:40.142 "firmware": 0, 00:14:40.142 "ns_manage": 0 00:14:40.142 }, 00:14:40.142 "multi_ctrlr": true, 00:14:40.142 "ana_reporting": false 00:14:40.142 }, 00:14:40.142 "vs": { 00:14:40.142 "nvme_version": "1.3" 00:14:40.142 }, 00:14:40.142 "ns_data": { 00:14:40.142 "id": 1, 00:14:40.142 "can_share": true 00:14:40.142 } 00:14:40.142 } 00:14:40.142 ], 00:14:40.142 "mp_policy": "active_passive" 00:14:40.142 } 00:14:40.142 } 00:14:40.142 ] 00:14:40.142 10:40:04 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:40.142 10:40:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=775901 00:14:40.142 10:40:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:40.142 Running I/O for 10 seconds... 00:14:41.081 Latency(us) 00:14:41.081 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.081 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.081 Nvme0n1 : 1.00 18179.00 71.01 0.00 0.00 0.00 0.00 0.00 00:14:41.081 =================================================================================================================== 00:14:41.081 Total : 18179.00 71.01 0.00 0.00 0.00 0.00 0.00 00:14:41.081 00:14:42.022 10:40:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a97b7a04-ffd3-46ec-9470-c531b291c479 00:14:42.283 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:42.283 Nvme0n1 : 2.00 18273.50 71.38 0.00 0.00 0.00 0.00 0.00 00:14:42.283 =================================================================================================================== 00:14:42.283 Total : 18273.50 71.38 0.00 0.00 0.00 0.00 0.00 00:14:42.283 00:14:42.283 true 00:14:42.283 10:40:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a97b7a04-ffd3-46ec-9470-c531b291c479 00:14:42.283 10:40:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:42.544 10:40:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:42.544 10:40:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:42.544 10:40:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 775901 00:14:43.115 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:43.115 Nvme0n1 : 3.00 18326.33 71.59 0.00 0.00 0.00 0.00 0.00 00:14:43.115 =================================================================================================================== 00:14:43.115 Total : 18326.33 71.59 0.00 0.00 0.00 0.00 0.00 00:14:43.115 00:14:44.499 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:44.499 Nvme0n1 : 4.00 18358.75 71.71 0.00 0.00 0.00 0.00 0.00 00:14:44.499 =================================================================================================================== 00:14:44.499 Total : 18358.75 71.71 0.00 0.00 0.00 0.00 0.00 00:14:44.499 00:14:45.071 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:45.071 Nvme0n1 : 5.00 18382.40 71.81 0.00 0.00 0.00 0.00 0.00 00:14:45.071 =================================================================================================================== 00:14:45.071 Total : 18382.40 71.81 0.00 0.00 0.00 0.00 0.00 00:14:45.071 00:14:46.456 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:46.456 Nvme0n1 : 6.00 18390.67 71.84 0.00 0.00 0.00 0.00 0.00 00:14:46.456 
=================================================================================================================== 00:14:46.456 Total : 18390.67 71.84 0.00 0.00 0.00 0.00 0.00 00:14:46.456 00:14:47.401 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:47.401 Nvme0n1 : 7.00 18396.57 71.86 0.00 0.00 0.00 0.00 0.00 00:14:47.401 =================================================================================================================== 00:14:47.401 Total : 18396.57 71.86 0.00 0.00 0.00 0.00 0.00 00:14:47.401 00:14:48.343 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:48.343 Nvme0n1 : 8.00 18408.75 71.91 0.00 0.00 0.00 0.00 0.00 00:14:48.343 =================================================================================================================== 00:14:48.343 Total : 18408.75 71.91 0.00 0.00 0.00 0.00 0.00 00:14:48.343 00:14:49.286 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:49.286 Nvme0n1 : 9.00 18418.44 71.95 0.00 0.00 0.00 0.00 0.00 00:14:49.287 =================================================================================================================== 00:14:49.287 Total : 18418.44 71.95 0.00 0.00 0.00 0.00 0.00 00:14:49.287 00:14:50.231 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:50.231 Nvme0n1 : 10.00 18426.20 71.98 0.00 0.00 0.00 0.00 0.00 00:14:50.231 =================================================================================================================== 00:14:50.231 Total : 18426.20 71.98 0.00 0.00 0.00 0.00 0.00 00:14:50.231 00:14:50.231 00:14:50.231 Latency(us) 00:14:50.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.231 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:50.231 Nvme0n1 : 10.01 18427.57 71.98 0.00 0.00 6942.58 4369.07 13926.40 00:14:50.231 =================================================================================================================== 00:14:50.231 Total : 18427.57 71.98 0.00 0.00 6942.58 4369.07 13926.40 00:14:50.231 0 00:14:50.231 10:40:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 775651 00:14:50.231 10:40:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@949 -- # '[' -z 775651 ']' 00:14:50.231 10:40:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # kill -0 775651 00:14:50.231 10:40:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # uname 00:14:50.231 10:40:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:50.231 10:40:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 775651 00:14:50.231 10:40:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:14:50.231 10:40:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:14:50.231 10:40:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # echo 'killing process with pid 775651' 00:14:50.231 killing process with pid 775651 00:14:50.231 10:40:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # kill 775651 00:14:50.231 Received shutdown signal, test time was about 10.000000 seconds 00:14:50.231 00:14:50.231 Latency(us) 00:14:50.231 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:14:50.231 =================================================================================================================== 00:14:50.231 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:50.231 10:40:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # wait 775651 00:14:50.493 10:40:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:50.493 10:40:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:50.754 10:40:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a97b7a04-ffd3-46ec-9470-c531b291c479 00:14:50.754 10:40:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:51.015 10:40:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:51.015 10:40:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:14:51.015 10:40:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 772039 00:14:51.015 10:40:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 772039 00:14:51.015 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 772039 Killed "${NVMF_APP[@]}" "$@" 00:14:51.015 10:40:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:14:51.015 10:40:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:14:51.015 10:40:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:51.015 10:40:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:51.015 10:40:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:51.015 10:40:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=778011 00:14:51.015 10:40:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:51.015 10:40:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 778011 00:14:51.015 10:40:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 778011 ']' 00:14:51.015 10:40:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.015 10:40:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:51.015 10:40:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:51.015 10:40:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:51.015 10:40:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:51.015 [2024-06-10 10:40:15.139743] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:14:51.015 [2024-06-10 10:40:15.139797] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.015 EAL: No free 2048 kB hugepages reported on node 1 00:14:51.015 [2024-06-10 10:40:15.206162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.015 [2024-06-10 10:40:15.269982] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:51.015 [2024-06-10 10:40:15.270019] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:51.015 [2024-06-10 10:40:15.270026] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:51.015 [2024-06-10 10:40:15.270036] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:51.015 [2024-06-10 10:40:15.270042] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:51.015 [2024-06-10 10:40:15.270064] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.959 10:40:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:51.959 10:40:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:14:51.959 10:40:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:51.959 10:40:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:51.959 10:40:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:51.959 10:40:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:51.959 10:40:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:51.959 [2024-06-10 10:40:16.071203] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:51.959 [2024-06-10 10:40:16.071299] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:51.959 [2024-06-10 10:40:16.071328] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:51.959 10:40:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:14:51.959 10:40:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 4331626b-86a8-4938-a3a1-c39d387aab58 00:14:51.959 10:40:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=4331626b-86a8-4938-a3a1-c39d387aab58 00:14:51.959 10:40:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:14:51.959 10:40:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:14:51.959 10:40:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@901 -- # [[ -z '' ]] 00:14:51.959 10:40:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:14:51.960 10:40:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:51.960 10:40:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4331626b-86a8-4938-a3a1-c39d387aab58 -t 2000 00:14:52.221 [ 00:14:52.221 { 00:14:52.221 "name": "4331626b-86a8-4938-a3a1-c39d387aab58", 00:14:52.221 "aliases": [ 00:14:52.221 "lvs/lvol" 00:14:52.221 ], 00:14:52.221 "product_name": "Logical Volume", 00:14:52.221 "block_size": 4096, 00:14:52.221 "num_blocks": 38912, 00:14:52.221 "uuid": "4331626b-86a8-4938-a3a1-c39d387aab58", 00:14:52.221 "assigned_rate_limits": { 00:14:52.221 "rw_ios_per_sec": 0, 00:14:52.221 "rw_mbytes_per_sec": 0, 00:14:52.221 "r_mbytes_per_sec": 0, 00:14:52.221 "w_mbytes_per_sec": 0 00:14:52.221 }, 00:14:52.221 "claimed": false, 00:14:52.221 "zoned": false, 00:14:52.221 "supported_io_types": { 00:14:52.221 "read": true, 00:14:52.221 "write": true, 00:14:52.221 "unmap": true, 00:14:52.221 "write_zeroes": true, 00:14:52.221 "flush": false, 00:14:52.221 "reset": true, 00:14:52.221 "compare": false, 00:14:52.221 "compare_and_write": false, 00:14:52.221 "abort": false, 00:14:52.221 "nvme_admin": false, 00:14:52.221 "nvme_io": false 00:14:52.221 }, 00:14:52.221 "driver_specific": { 00:14:52.221 "lvol": { 00:14:52.221 "lvol_store_uuid": "a97b7a04-ffd3-46ec-9470-c531b291c479", 00:14:52.221 "base_bdev": "aio_bdev", 00:14:52.221 "thin_provision": false, 00:14:52.221 "num_allocated_clusters": 38, 00:14:52.221 "snapshot": false, 00:14:52.221 "clone": false, 00:14:52.221 "esnap_clone": false 00:14:52.221 } 00:14:52.221 } 00:14:52.221 } 00:14:52.221 ] 00:14:52.221 10:40:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:14:52.221 10:40:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:14:52.221 10:40:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a97b7a04-ffd3-46ec-9470-c531b291c479 00:14:52.481 10:40:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:14:52.481 10:40:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a97b7a04-ffd3-46ec-9470-c531b291c479 00:14:52.481 10:40:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:14:52.481 10:40:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:14:52.481 10:40:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:52.742 [2024-06-10 10:40:16.847144] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:52.742 10:40:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
a97b7a04-ffd3-46ec-9470-c531b291c479 00:14:52.742 10:40:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # local es=0 00:14:52.742 10:40:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a97b7a04-ffd3-46ec-9470-c531b291c479 00:14:52.743 10:40:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:52.743 10:40:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:52.743 10:40:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:52.743 10:40:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:52.743 10:40:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:52.743 10:40:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:52.743 10:40:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:52.743 10:40:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:52.743 10:40:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a97b7a04-ffd3-46ec-9470-c531b291c479 00:14:52.743 request: 00:14:52.743 { 00:14:52.743 "uuid": "a97b7a04-ffd3-46ec-9470-c531b291c479", 00:14:52.743 "method": "bdev_lvol_get_lvstores", 00:14:52.743 "req_id": 1 00:14:52.743 } 00:14:52.743 Got JSON-RPC error response 00:14:52.743 response: 00:14:52.743 { 00:14:52.743 "code": -19, 00:14:52.743 "message": "No such device" 00:14:52.743 } 00:14:53.004 10:40:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # es=1 00:14:53.004 10:40:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:53.004 10:40:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:53.004 10:40:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:53.004 10:40:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:53.004 aio_bdev 00:14:53.004 10:40:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4331626b-86a8-4938-a3a1-c39d387aab58 00:14:53.004 10:40:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=4331626b-86a8-4938-a3a1-c39d387aab58 00:14:53.004 10:40:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:14:53.004 10:40:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:14:53.004 10:40:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # [[ -z '' ]] 
00:14:53.004 10:40:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:14:53.004 10:40:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:53.266 10:40:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4331626b-86a8-4938-a3a1-c39d387aab58 -t 2000 00:14:53.266 [ 00:14:53.266 { 00:14:53.266 "name": "4331626b-86a8-4938-a3a1-c39d387aab58", 00:14:53.266 "aliases": [ 00:14:53.266 "lvs/lvol" 00:14:53.266 ], 00:14:53.266 "product_name": "Logical Volume", 00:14:53.266 "block_size": 4096, 00:14:53.266 "num_blocks": 38912, 00:14:53.266 "uuid": "4331626b-86a8-4938-a3a1-c39d387aab58", 00:14:53.266 "assigned_rate_limits": { 00:14:53.266 "rw_ios_per_sec": 0, 00:14:53.266 "rw_mbytes_per_sec": 0, 00:14:53.266 "r_mbytes_per_sec": 0, 00:14:53.266 "w_mbytes_per_sec": 0 00:14:53.266 }, 00:14:53.266 "claimed": false, 00:14:53.266 "zoned": false, 00:14:53.266 "supported_io_types": { 00:14:53.266 "read": true, 00:14:53.266 "write": true, 00:14:53.266 "unmap": true, 00:14:53.266 "write_zeroes": true, 00:14:53.266 "flush": false, 00:14:53.266 "reset": true, 00:14:53.266 "compare": false, 00:14:53.266 "compare_and_write": false, 00:14:53.266 "abort": false, 00:14:53.266 "nvme_admin": false, 00:14:53.266 "nvme_io": false 00:14:53.266 }, 00:14:53.266 "driver_specific": { 00:14:53.266 "lvol": { 00:14:53.266 "lvol_store_uuid": "a97b7a04-ffd3-46ec-9470-c531b291c479", 00:14:53.266 "base_bdev": "aio_bdev", 00:14:53.266 "thin_provision": false, 00:14:53.267 "num_allocated_clusters": 38, 00:14:53.267 "snapshot": false, 00:14:53.267 "clone": false, 00:14:53.267 "esnap_clone": false 00:14:53.267 } 00:14:53.267 } 00:14:53.267 } 00:14:53.267 ] 00:14:53.267 10:40:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:14:53.267 10:40:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a97b7a04-ffd3-46ec-9470-c531b291c479 00:14:53.267 10:40:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:53.528 10:40:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:53.528 10:40:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a97b7a04-ffd3-46ec-9470-c531b291c479 00:14:53.528 10:40:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:53.528 10:40:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:53.528 10:40:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4331626b-86a8-4938-a3a1-c39d387aab58 00:14:53.789 10:40:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a97b7a04-ffd3-46ec-9470-c531b291c479 00:14:54.050 10:40:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:54.050 10:40:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:54.050 00:14:54.050 real 0m16.787s 00:14:54.050 user 0m44.047s 00:14:54.050 sys 0m2.818s 00:14:54.050 10:40:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:54.050 10:40:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:54.050 ************************************ 00:14:54.050 END TEST lvs_grow_dirty 00:14:54.050 ************************************ 00:14:54.050 10:40:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:54.050 10:40:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # type=--id 00:14:54.050 10:40:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # id=0 00:14:54.050 10:40:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:14:54.050 10:40:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:54.051 10:40:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:14:54.051 10:40:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:14:54.051 10:40:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # for n in $shm_files 00:14:54.051 10:40:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:54.311 nvmf_trace.0 00:14:54.311 10:40:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # return 0 00:14:54.311 10:40:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:54.311 10:40:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:54.311 10:40:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:14:54.311 10:40:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:54.311 10:40:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:14:54.312 10:40:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:54.312 10:40:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:54.312 rmmod nvme_tcp 00:14:54.312 rmmod nvme_fabrics 00:14:54.312 rmmod nvme_keyring 00:14:54.312 10:40:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:54.312 10:40:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:14:54.312 10:40:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:14:54.312 10:40:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 778011 ']' 00:14:54.312 10:40:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 778011 00:14:54.312 10:40:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@949 -- # '[' -z 778011 ']' 00:14:54.312 10:40:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # kill -0 778011 00:14:54.312 10:40:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # uname 00:14:54.312 10:40:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:54.312 10:40:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 778011 00:14:54.312 10:40:18 nvmf_tcp.nvmf_lvs_grow 
-- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:54.312 10:40:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:54.312 10:40:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # echo 'killing process with pid 778011' 00:14:54.312 killing process with pid 778011 00:14:54.312 10:40:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # kill 778011 00:14:54.312 10:40:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # wait 778011 00:14:54.573 10:40:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:54.573 10:40:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:54.573 10:40:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:54.573 10:40:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:54.573 10:40:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:54.573 10:40:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.573 10:40:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:54.573 10:40:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.488 10:40:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:56.488 00:14:56.488 real 0m43.045s 00:14:56.488 user 1m4.861s 00:14:56.488 sys 0m10.015s 00:14:56.488 10:40:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:56.488 10:40:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:56.488 ************************************ 00:14:56.488 END TEST nvmf_lvs_grow 00:14:56.488 ************************************ 00:14:56.488 10:40:20 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:56.488 10:40:20 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:56.488 10:40:20 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:56.488 10:40:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:56.750 ************************************ 00:14:56.750 START TEST nvmf_bdev_io_wait 00:14:56.750 ************************************ 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:56.750 * Looking for test storage... 
00:14:56.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:14:56.750 10:40:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:04.894 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:04.894 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:04.894 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:04.895 Found net devices under 0000:31:00.0: cvl_0_0 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:04.895 Found net devices under 0000:31:00.1: cvl_0_1 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:04.895 10:40:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:04.895 10:40:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:04.895 10:40:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:04.895 10:40:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:04.895 10:40:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:04.895 10:40:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:04.895 10:40:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:04.895 10:40:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:04.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:04.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:15:04.895 00:15:04.895 --- 10.0.0.2 ping statistics --- 00:15:04.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.895 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:15:04.895 10:40:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:04.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:04.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.360 ms 00:15:04.895 00:15:04.895 --- 10.0.0.1 ping statistics --- 00:15:04.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.895 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:15:04.895 10:40:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:04.895 10:40:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:15:04.895 10:40:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:04.895 10:40:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:04.895 10:40:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:04.895 10:40:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:04.895 10:40:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:04.895 10:40:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:04.895 10:40:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:04.895 10:40:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:04.895 10:40:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:04.895 10:40:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:04.895 10:40:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:04.895 10:40:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=782815 00:15:04.895 10:40:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 782815 00:15:04.895 10:40:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:04.895 10:40:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@830 -- # '[' -z 782815 ']' 00:15:04.895 10:40:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.895 10:40:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:04.895 10:40:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.895 10:40:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:04.895 10:40:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:04.895 [2024-06-10 10:40:28.267330] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
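In condensed form, the TCP test-network bring-up traced above (nvmf_tcp_init) amounts to the shell sequence below. This is a sketch rather than the script itself: the interface names (cvl_0_0, cvl_0_1), addresses, port and application flags are taken from the trace, while the relative nvmf_tgt path stands in for the full workspace path shown in the log.

  # place one E810 port in a private namespace so target and initiator use separate network stacks
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP reach the listener
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator
  # the target application then starts inside the namespace:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc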
00:15:04.895 [2024-06-10 10:40:28.267392] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.895 EAL: No free 2048 kB hugepages reported on node 1 00:15:04.895 [2024-06-10 10:40:28.341098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:04.895 [2024-06-10 10:40:28.417735] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:04.895 [2024-06-10 10:40:28.417776] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:04.895 [2024-06-10 10:40:28.417784] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:04.895 [2024-06-10 10:40:28.417791] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:04.895 [2024-06-10 10:40:28.417796] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:04.895 [2024-06-10 10:40:28.417934] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:15:04.895 [2024-06-10 10:40:28.418048] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:15:04.895 [2024-06-10 10:40:28.418204] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.895 [2024-06-10 10:40:28.418206] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:15:04.895 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:04.895 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@863 -- # return 0 00:15:04.895 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:04.895 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:04.895 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:04.895 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:04.895 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:04.895 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:04.895 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:04.895 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:04.895 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:04.895 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:04.895 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:04.895 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:04.896 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:04.896 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:04.896 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:04.896 [2024-06-10 10:40:29.154392] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:04.896 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:04.896 10:40:29 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:04.896 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:04.896 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:05.157 Malloc0 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:05.157 [2024-06-10 10:40:29.220324] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:05.157 [2024-06-10 10:40:29.220559] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=783155 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=783157 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:05.157 { 00:15:05.157 "params": { 00:15:05.157 "name": "Nvme$subsystem", 00:15:05.157 "trtype": "$TEST_TRANSPORT", 00:15:05.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:05.157 "adrfam": "ipv4", 00:15:05.157 "trsvcid": "$NVMF_PORT", 00:15:05.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:05.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:05.157 "hdgst": ${hdgst:-false}, 00:15:05.157 "ddgst": ${ddgst:-false} 00:15:05.157 }, 00:15:05.157 "method": 
"bdev_nvme_attach_controller" 00:15:05.157 } 00:15:05.157 EOF 00:15:05.157 )") 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=783159 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:05.157 { 00:15:05.157 "params": { 00:15:05.157 "name": "Nvme$subsystem", 00:15:05.157 "trtype": "$TEST_TRANSPORT", 00:15:05.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:05.157 "adrfam": "ipv4", 00:15:05.157 "trsvcid": "$NVMF_PORT", 00:15:05.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:05.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:05.157 "hdgst": ${hdgst:-false}, 00:15:05.157 "ddgst": ${ddgst:-false} 00:15:05.157 }, 00:15:05.157 "method": "bdev_nvme_attach_controller" 00:15:05.157 } 00:15:05.157 EOF 00:15:05.157 )") 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=783162 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:05.157 { 00:15:05.157 "params": { 00:15:05.157 "name": "Nvme$subsystem", 00:15:05.157 "trtype": "$TEST_TRANSPORT", 00:15:05.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:05.157 "adrfam": "ipv4", 00:15:05.157 "trsvcid": "$NVMF_PORT", 00:15:05.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:05.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:05.157 "hdgst": ${hdgst:-false}, 00:15:05.157 "ddgst": ${ddgst:-false} 00:15:05.157 }, 00:15:05.157 "method": "bdev_nvme_attach_controller" 00:15:05.157 } 00:15:05.157 EOF 00:15:05.157 )") 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 
-- # local subsystem config 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:05.157 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:05.157 { 00:15:05.157 "params": { 00:15:05.157 "name": "Nvme$subsystem", 00:15:05.157 "trtype": "$TEST_TRANSPORT", 00:15:05.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:05.157 "adrfam": "ipv4", 00:15:05.157 "trsvcid": "$NVMF_PORT", 00:15:05.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:05.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:05.158 "hdgst": ${hdgst:-false}, 00:15:05.158 "ddgst": ${ddgst:-false} 00:15:05.158 }, 00:15:05.158 "method": "bdev_nvme_attach_controller" 00:15:05.158 } 00:15:05.158 EOF 00:15:05.158 )") 00:15:05.158 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:05.158 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 783155 00:15:05.158 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:05.158 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:05.158 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:05.158 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:05.158 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:05.158 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:05.158 "params": { 00:15:05.158 "name": "Nvme1", 00:15:05.158 "trtype": "tcp", 00:15:05.158 "traddr": "10.0.0.2", 00:15:05.158 "adrfam": "ipv4", 00:15:05.158 "trsvcid": "4420", 00:15:05.158 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:05.158 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:05.158 "hdgst": false, 00:15:05.158 "ddgst": false 00:15:05.158 }, 00:15:05.158 "method": "bdev_nvme_attach_controller" 00:15:05.158 }' 00:15:05.158 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
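Each of the four gen_nvmf_target_json calls above emits a small JSON config whose single entry is a bdev_nvme_attach_controller pointing at 10.0.0.2:4420 and nqn.2016-06.io.spdk:cnode1 (the resolved form is printed in the trace around this point). Roughly, the test drives four bdevperf clients against that one subsystem in parallel, one workload each. The background-and-wait structure below is a sketch inferred from the WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID assignments in the trace, and the relative paths stand in for the workspace paths in the log.

  # each instance gets its own core mask (-m) and shm id (-i) plus the attach-controller JSON on /dev/fd/63
  ./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
  WRITE_PID=$!
  ./build/examples/bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 &
  READ_PID=$!
  ./build/examples/bdevperf -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
  FLUSH_PID=$!
  ./build/examples/bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
  UNMAP_PID=$!
  wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"   # per-workload bdevperf results are reported below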
00:15:05.158 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:05.158 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:05.158 "params": { 00:15:05.158 "name": "Nvme1", 00:15:05.158 "trtype": "tcp", 00:15:05.158 "traddr": "10.0.0.2", 00:15:05.158 "adrfam": "ipv4", 00:15:05.158 "trsvcid": "4420", 00:15:05.158 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:05.158 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:05.158 "hdgst": false, 00:15:05.158 "ddgst": false 00:15:05.158 }, 00:15:05.158 "method": "bdev_nvme_attach_controller" 00:15:05.158 }' 00:15:05.158 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:05.158 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:05.158 "params": { 00:15:05.158 "name": "Nvme1", 00:15:05.158 "trtype": "tcp", 00:15:05.158 "traddr": "10.0.0.2", 00:15:05.158 "adrfam": "ipv4", 00:15:05.158 "trsvcid": "4420", 00:15:05.158 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:05.158 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:05.158 "hdgst": false, 00:15:05.158 "ddgst": false 00:15:05.158 }, 00:15:05.158 "method": "bdev_nvme_attach_controller" 00:15:05.158 }' 00:15:05.158 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:05.158 10:40:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:05.158 "params": { 00:15:05.158 "name": "Nvme1", 00:15:05.158 "trtype": "tcp", 00:15:05.158 "traddr": "10.0.0.2", 00:15:05.158 "adrfam": "ipv4", 00:15:05.158 "trsvcid": "4420", 00:15:05.158 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:05.158 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:05.158 "hdgst": false, 00:15:05.158 "ddgst": false 00:15:05.158 }, 00:15:05.158 "method": "bdev_nvme_attach_controller" 00:15:05.158 }' 00:15:05.158 [2024-06-10 10:40:29.273725] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:15:05.158 [2024-06-10 10:40:29.273776] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:05.158 [2024-06-10 10:40:29.274051] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:15:05.158 [2024-06-10 10:40:29.274093] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:05.158 [2024-06-10 10:40:29.274834] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:15:05.158 [2024-06-10 10:40:29.274875] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:05.158 [2024-06-10 10:40:29.278133] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
00:15:05.158 [2024-06-10 10:40:29.278182] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:05.158 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.158 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.158 [2024-06-10 10:40:29.419176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.158 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.418 [2024-06-10 10:40:29.469052] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 5 00:15:05.418 [2024-06-10 10:40:29.478633] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.418 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.418 [2024-06-10 10:40:29.529396] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4 00:15:05.418 [2024-06-10 10:40:29.540237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.418 [2024-06-10 10:40:29.569350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.418 [2024-06-10 10:40:29.592401] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 7 00:15:05.418 [2024-06-10 10:40:29.619683] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 6 00:15:05.679 Running I/O for 1 seconds... 00:15:05.679 Running I/O for 1 seconds... 00:15:05.679 Running I/O for 1 seconds... 00:15:05.679 Running I/O for 1 seconds... 00:15:06.620 00:15:06.620 Latency(us) 00:15:06.620 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.620 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:06.620 Nvme1n1 : 1.00 18687.99 73.00 0.00 0.00 6831.43 4014.08 13489.49 00:15:06.620 =================================================================================================================== 00:15:06.620 Total : 18687.99 73.00 0.00 0.00 6831.43 4014.08 13489.49 00:15:06.620 00:15:06.621 Latency(us) 00:15:06.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.621 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:06.621 Nvme1n1 : 1.00 184321.15 720.00 0.00 0.00 691.64 276.48 1249.28 00:15:06.621 =================================================================================================================== 00:15:06.621 Total : 184321.15 720.00 0.00 0.00 691.64 276.48 1249.28 00:15:06.621 00:15:06.621 Latency(us) 00:15:06.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.621 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:06.621 Nvme1n1 : 1.01 11890.38 46.45 0.00 0.00 10721.86 6744.75 18240.85 00:15:06.621 =================================================================================================================== 00:15:06.621 Total : 11890.38 46.45 0.00 0.00 10721.86 6744.75 18240.85 00:15:06.621 00:15:06.621 Latency(us) 00:15:06.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.621 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:06.621 Nvme1n1 : 1.00 12808.08 50.03 0.00 0.00 9965.67 4450.99 23811.41 00:15:06.621 =================================================================================================================== 00:15:06.621 Total : 12808.08 50.03 0.00 0.00 9965.67 4450.99 23811.41 00:15:06.881 10:40:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # 
wait 783157 00:15:06.881 10:40:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 783159 00:15:06.881 10:40:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 783162 00:15:06.881 10:40:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:06.881 10:40:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:06.881 10:40:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:06.881 10:40:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:06.881 10:40:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:06.881 10:40:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:06.881 10:40:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:06.881 10:40:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:15:06.881 10:40:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:06.881 10:40:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:15:06.881 10:40:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:06.881 10:40:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:06.881 rmmod nvme_tcp 00:15:06.881 rmmod nvme_fabrics 00:15:06.881 rmmod nvme_keyring 00:15:07.141 10:40:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:07.141 10:40:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:15:07.142 10:40:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:15:07.142 10:40:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 782815 ']' 00:15:07.142 10:40:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 782815 00:15:07.142 10:40:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@949 -- # '[' -z 782815 ']' 00:15:07.142 10:40:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # kill -0 782815 00:15:07.142 10:40:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # uname 00:15:07.142 10:40:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:07.142 10:40:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 782815 00:15:07.142 10:40:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:07.142 10:40:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:07.142 10:40:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # echo 'killing process with pid 782815' 00:15:07.142 killing process with pid 782815 00:15:07.142 10:40:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # kill 782815 00:15:07.142 [2024-06-10 10:40:31.233936] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:07.142 10:40:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # wait 782815 00:15:07.142 10:40:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:07.142 10:40:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:07.142 10:40:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:15:07.142 10:40:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:07.142 10:40:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:07.142 10:40:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.142 10:40:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:07.142 10:40:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:09.705 10:40:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:09.705 00:15:09.705 real 0m12.647s 00:15:09.705 user 0m19.290s 00:15:09.705 sys 0m6.962s 00:15:09.705 10:40:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:09.705 10:40:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:09.705 ************************************ 00:15:09.705 END TEST nvmf_bdev_io_wait 00:15:09.705 ************************************ 00:15:09.705 10:40:33 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:09.705 10:40:33 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:09.705 10:40:33 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:09.705 10:40:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:09.705 ************************************ 00:15:09.705 START TEST nvmf_queue_depth 00:15:09.705 ************************************ 00:15:09.705 10:40:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:09.705 * Looking for test storage... 
00:15:09.705 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:09.705 10:40:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:09.705 10:40:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:15:09.705 10:40:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:09.705 10:40:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:09.705 10:40:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:09.705 10:40:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:09.705 10:40:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:09.705 10:40:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:09.705 10:40:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:09.705 10:40:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:09.705 10:40:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:09.705 10:40:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:09.705 10:40:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:09.705 10:40:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:09.705 10:40:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:09.705 10:40:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:09.705 10:40:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:09.705 10:40:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:09.705 10:40:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:09.705 10:40:33 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:09.705 10:40:33 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:09.705 10:40:33 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:09.705 10:40:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.705 10:40:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.705 10:40:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.705 10:40:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:15:09.706 10:40:33 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.706 10:40:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:15:09.706 10:40:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:09.706 10:40:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:09.706 10:40:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:09.706 10:40:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:09.706 10:40:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:09.706 10:40:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:09.706 10:40:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:09.706 10:40:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:09.706 10:40:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:09.706 10:40:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:09.706 10:40:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:09.706 10:40:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:09.706 10:40:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:09.706 10:40:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:09.706 10:40:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:09.706 10:40:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:09.706 10:40:33 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:15:09.706 10:40:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:09.706 10:40:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:09.706 10:40:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:09.706 10:40:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:09.706 10:40:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:09.706 10:40:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:15:09.706 10:40:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:16.430 
10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:16.430 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:16.430 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:16.430 Found net devices under 0000:31:00.0: cvl_0_0 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:16.430 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:16.431 Found net devices under 0000:31:00.1: cvl_0_1 00:15:16.431 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:16.431 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:16.431 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:15:16.431 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:16.431 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:16.431 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:16.431 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:16.431 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:16.431 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:16.431 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:16.431 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:16.431 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:16.431 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:16.431 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:16.431 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:16.431 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:16.431 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:16.431 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:16.431 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:16.692 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:16.692 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:16.692 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:16.692 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:16.692 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:16.692 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:16.692 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:16.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:16.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.719 ms 00:15:16.692 00:15:16.692 --- 10.0.0.2 ping statistics --- 00:15:16.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.692 rtt min/avg/max/mdev = 0.719/0.719/0.719/0.000 ms 00:15:16.692 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:16.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:16.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:15:16.692 00:15:16.692 --- 10.0.0.1 ping statistics --- 00:15:16.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.692 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:15:16.692 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:16.692 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:15:16.692 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:16.692 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:16.692 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:16.692 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:16.692 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:16.692 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:16.692 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:16.692 10:40:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:16.692 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:16.692 10:40:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:16.692 10:40:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:16.953 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=787903 00:15:16.953 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 787903 00:15:16.953 10:40:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:16.953 10:40:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 787903 ']' 00:15:16.953 10:40:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.953 10:40:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:16.953 10:40:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.953 10:40:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:16.953 10:40:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:16.953 [2024-06-10 10:40:41.035389] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
00:15:16.953 [2024-06-10 10:40:41.035451] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:16.953 EAL: No free 2048 kB hugepages reported on node 1 00:15:16.953 [2024-06-10 10:40:41.126073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.953 [2024-06-10 10:40:41.218455] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:16.953 [2024-06-10 10:40:41.218511] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:16.953 [2024-06-10 10:40:41.218519] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:16.953 [2024-06-10 10:40:41.218526] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:16.953 [2024-06-10 10:40:41.218532] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:16.953 [2024-06-10 10:40:41.218556] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.896 10:40:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:17.896 10:40:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:15:17.896 10:40:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:17.896 10:40:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:17.896 10:40:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:17.896 10:40:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:17.896 10:40:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:17.896 10:40:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:17.896 10:40:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:17.896 [2024-06-10 10:40:41.869584] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:17.896 10:40:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:17.896 10:40:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:17.896 10:40:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:17.896 10:40:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:17.896 Malloc0 00:15:17.896 10:40:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:17.897 10:40:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:17.897 10:40:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:17.897 10:40:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:17.897 10:40:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:17.897 10:40:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:17.897 10:40:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:17.897 10:40:41 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:17.897 10:40:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:17.897 10:40:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:17.897 10:40:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:17.897 10:40:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:17.897 [2024-06-10 10:40:41.939752] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:17.897 [2024-06-10 10:40:41.940046] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:17.897 10:40:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:17.897 10:40:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=787938 00:15:17.897 10:40:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:17.897 10:40:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:17.897 10:40:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 787938 /var/tmp/bdevperf.sock 00:15:17.897 10:40:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 787938 ']' 00:15:17.897 10:40:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:17.897 10:40:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:17.897 10:40:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:17.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:17.897 10:40:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:17.897 10:40:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:17.897 [2024-06-10 10:40:41.995965] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
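Before bdevperf finishes starting, the traced rpc_cmd calls above have already configured the target for the queue-depth run: a TCP transport with an 8192-byte I/O unit size, a 64 MB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 carrying that bdev as a namespace, and a listener on 10.0.0.2:4420; bdevperf then attaches to it and pushes a 1024-deep queue of 4 KiB verify I/O for 10 seconds. The same sequence written out as plain rpc.py/bdevperf invocations (every flag is copied from the traced commands; rpc.py talks to the target's default /var/tmp/spdk.sock unless -s points it at bdevperf's socket):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# initiator side: bdevperf in RPC-wait mode, then attach the remote controller and kick off the run
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests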
00:15:17.897 [2024-06-10 10:40:41.996031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid787938 ] 00:15:17.897 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.897 [2024-06-10 10:40:42.061565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.897 [2024-06-10 10:40:42.138601] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.839 10:40:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:18.839 10:40:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:15:18.839 10:40:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:18.839 10:40:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:18.839 10:40:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:18.839 NVMe0n1 00:15:18.839 10:40:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:18.839 10:40:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:18.839 Running I/O for 10 seconds... 00:15:28.839 00:15:28.839 Latency(us) 00:15:28.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.839 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:28.839 Verification LBA range: start 0x0 length 0x4000 00:15:28.839 NVMe0n1 : 10.05 11236.57 43.89 0.00 0.00 90775.57 10540.37 73837.23 00:15:28.839 =================================================================================================================== 00:15:28.839 Total : 11236.57 43.89 0.00 0.00 90775.57 10540.37 73837.23 00:15:28.839 0 00:15:28.839 10:40:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 787938 00:15:28.839 10:40:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 787938 ']' 00:15:28.839 10:40:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 787938 00:15:28.839 10:40:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname 00:15:28.839 10:40:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:28.839 10:40:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 787938 00:15:28.839 10:40:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:28.839 10:40:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:28.839 10:40:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 787938' 00:15:28.839 killing process with pid 787938 00:15:28.839 10:40:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 787938 00:15:28.839 Received shutdown signal, test time was about 10.000000 seconds 00:15:28.839 00:15:28.839 Latency(us) 00:15:28.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.839 =================================================================================================================== 00:15:28.839 Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:15:28.839 10:40:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 787938 00:15:29.100 10:40:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:29.100 10:40:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:29.100 10:40:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:29.100 10:40:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:15:29.100 10:40:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:29.100 10:40:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:15:29.100 10:40:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:29.100 10:40:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:29.100 rmmod nvme_tcp 00:15:29.100 rmmod nvme_fabrics 00:15:29.100 rmmod nvme_keyring 00:15:29.100 10:40:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:29.100 10:40:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:15:29.100 10:40:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:15:29.100 10:40:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 787903 ']' 00:15:29.100 10:40:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 787903 00:15:29.100 10:40:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 787903 ']' 00:15:29.100 10:40:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 787903 00:15:29.100 10:40:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname 00:15:29.100 10:40:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:29.100 10:40:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 787903 00:15:29.100 10:40:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:15:29.100 10:40:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:15:29.100 10:40:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 787903' 00:15:29.100 killing process with pid 787903 00:15:29.100 10:40:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 787903 00:15:29.100 [2024-06-10 10:40:53.325341] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:29.100 10:40:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 787903 00:15:29.361 10:40:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:29.361 10:40:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:29.361 10:40:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:29.361 10:40:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:29.361 10:40:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:29.361 10:40:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.361 10:40:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:29.361 10:40:53 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.275 10:40:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:31.275 00:15:31.275 real 0m22.003s 00:15:31.275 user 0m25.445s 00:15:31.275 sys 0m6.560s 00:15:31.275 10:40:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:31.275 10:40:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:31.275 ************************************ 00:15:31.275 END TEST nvmf_queue_depth 00:15:31.275 ************************************ 00:15:31.275 10:40:55 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:31.275 10:40:55 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:31.275 10:40:55 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:31.275 10:40:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:31.537 ************************************ 00:15:31.537 START TEST nvmf_target_multipath 00:15:31.537 ************************************ 00:15:31.537 10:40:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:31.537 * Looking for test storage... 00:15:31.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:31.537 10:40:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:31.537 10:40:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:15:31.537 10:40:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:31.537 10:40:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:31.537 10:40:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:31.537 10:40:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:31.537 10:40:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:31.537 10:40:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:31.537 10:40:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:31.537 10:40:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:31.537 10:40:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:31.537 10:40:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:31.537 10:40:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:15:31.538 10:40:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:39.705 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:39.705 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:15:39.705 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:39.705 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:39.705 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:39.705 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:39.705 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:39.705 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:15:39.705 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:39.705 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:15:39.705 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:15:39.705 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:15:39.706 10:41:02 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:39.706 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:39.706 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:39.706 10:41:02 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:39.706 Found net devices under 0000:31:00.0: cvl_0_0 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:39.706 Found net devices under 0000:31:00.1: cvl_0_1 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:39.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:39.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.706 ms 00:15:39.706 00:15:39.706 --- 10.0.0.2 ping statistics --- 00:15:39.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.706 rtt min/avg/max/mdev = 0.706/0.706/0.706/0.000 ms 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:39.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:39.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.358 ms 00:15:39.706 00:15:39.706 --- 10.0.0.1 ping statistics --- 00:15:39.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.706 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:15:39.706 only one NIC for nvmf test 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:39.706 rmmod nvme_tcp 00:15:39.706 rmmod nvme_fabrics 00:15:39.706 rmmod nvme_keyring 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:39.706 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:39.707 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:39.707 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:39.707 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:39.707 10:41:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.707 10:41:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:39.707 10:41:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.096 10:41:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:15:41.096 10:41:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:15:41.096 10:41:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:15:41.096 10:41:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:41.096 10:41:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:41.096 10:41:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:41.096 10:41:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:41.096 10:41:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:41.096 10:41:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:41.096 10:41:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:41.096 10:41:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:41.096 10:41:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:41.096 10:41:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:41.096 10:41:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:41.096 10:41:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:41.096 10:41:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:41.096 10:41:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:41.096 10:41:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:41.096 10:41:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.096 10:41:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:41.096 10:41:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.096 10:41:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:41.096 00:15:41.096 real 0m9.446s 00:15:41.096 user 0m1.966s 00:15:41.096 sys 0m5.338s 00:15:41.096 10:41:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:41.096 10:41:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:41.096 ************************************ 00:15:41.096 END TEST nvmf_target_multipath 00:15:41.096 ************************************ 00:15:41.096 10:41:05 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:41.096 10:41:05 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:41.096 10:41:05 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:41.096 10:41:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:41.096 ************************************ 00:15:41.096 START TEST nvmf_zcopy 00:15:41.096 ************************************ 00:15:41.096 10:41:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:41.096 * Looking for test storage... 
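The multipath test never reaches its I/O stages on this machine: NVMF_SECOND_TARGET_IP stays empty during nvmf_tcp_init, so target/multipath.sh prints 'only one NIC for nvmf test' and exits 0, and everything traced above is teardown, run once directly and once more from the script's EXIT trap (multipath.sh@1). Stripped of the xtrace noise, that teardown amounts to roughly the following; the ip netns delete line is an assumption about what the _remove_spdk_ns helper does here, the rest is copied from the log:

sync
modprobe -v -r nvme-tcp        # the rmmod lines show this also drops nvme_fabrics and nvme_keyring
modprobe -v -r nvme-fabrics
# nvmf_tcp_fini: drop the target namespace (assumed) and flush the initiator-side address
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1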
00:15:41.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:41.096 10:41:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:41.096 10:41:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:15:41.096 10:41:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:41.096 10:41:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:41.096 10:41:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:41.096 10:41:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:41.096 10:41:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:41.096 10:41:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:41.096 10:41:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:41.096 10:41:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:41.096 10:41:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:41.096 10:41:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:41.096 10:41:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:41.097 10:41:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:41.097 10:41:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:41.097 10:41:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:41.097 10:41:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:41.097 10:41:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:41.097 10:41:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:41.097 10:41:05 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:41.097 10:41:05 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:41.097 10:41:05 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:41.097 10:41:05 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.097 10:41:05 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:15:41.097 10:41:05 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.097 10:41:05 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:15:41.097 10:41:05 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.097 10:41:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:15:41.097 10:41:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:41.097 10:41:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:41.097 10:41:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:41.097 10:41:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:41.097 10:41:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:41.097 10:41:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:41.097 10:41:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:41.097 10:41:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:41.097 10:41:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:15:41.097 10:41:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:41.097 10:41:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:41.097 10:41:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:41.097 10:41:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:41.097 10:41:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:41.097 10:41:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.097 10:41:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:41.097 10:41:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.097 10:41:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:41.097 10:41:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:41.097 10:41:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:15:41.097 10:41:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:49.265 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:49.265 
10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:49.265 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:49.265 Found net devices under 0000:31:00.0: cvl_0_0 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:49.265 Found net devices under 0000:31:00.1: cvl_0_1 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:15:49.265 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:49.266 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:49.266 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:15:49.266 00:15:49.266 --- 10.0.0.2 ping statistics --- 00:15:49.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.266 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:49.266 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:49.266 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:15:49.266 00:15:49.266 --- 10.0.0.1 ping statistics --- 00:15:49.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.266 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=798709 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 798709 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@830 -- # '[' -z 798709 ']' 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:49.266 10:41:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:49.266 [2024-06-10 10:41:12.621382] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:15:49.266 [2024-06-10 10:41:12.621430] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.266 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.266 [2024-06-10 10:41:12.704573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.266 [2024-06-10 10:41:12.778005] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:49.266 [2024-06-10 10:41:12.778057] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
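Each test script rebuilds the environment from scratch, which is why the same nvmf_tcp_init plumbing shows up again ahead of the zcopy target's start-up: one E810 port (cvl_0_0) is moved into a fresh network namespace to act as the target while its sibling (cvl_0_1) stays in the root namespace as the initiator, so a single host can drive real NIC-to-NIC TCP traffic against itself. The sequence, with the interface names and addresses this job uses:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port moves into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic through
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1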
00:15:49.266 [2024-06-10 10:41:12.778066] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:49.266 [2024-06-10 10:41:12.778073] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:49.266 [2024-06-10 10:41:12.778078] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:49.266 [2024-06-10 10:41:12.778102] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:15:49.266 10:41:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:49.266 10:41:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@863 -- # return 0 00:15:49.266 10:41:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:49.266 10:41:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:49.266 10:41:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:49.266 10:41:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:49.266 10:41:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:49.266 10:41:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:49.266 10:41:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:49.266 10:41:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:49.266 [2024-06-10 10:41:13.445093] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:49.266 10:41:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:49.266 10:41:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:49.266 10:41:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:49.266 10:41:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:49.266 10:41:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:49.266 10:41:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:49.266 10:41:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:49.266 10:41:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:49.266 [2024-06-10 10:41:13.469090] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:49.266 [2024-06-10 10:41:13.469405] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:49.266 10:41:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:49.266 10:41:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:49.266 10:41:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:49.266 10:41:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:49.266 10:41:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:49.266 10:41:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:49.266 10:41:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:15:49.266 10:41:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:49.266 malloc0 00:15:49.266 10:41:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:49.266 10:41:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:49.266 10:41:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:49.267 10:41:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:49.267 10:41:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:49.267 10:41:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:49.267 10:41:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:49.267 10:41:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:15:49.267 10:41:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:15:49.267 10:41:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:49.267 10:41:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:49.267 { 00:15:49.267 "params": { 00:15:49.267 "name": "Nvme$subsystem", 00:15:49.267 "trtype": "$TEST_TRANSPORT", 00:15:49.267 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:49.267 "adrfam": "ipv4", 00:15:49.267 "trsvcid": "$NVMF_PORT", 00:15:49.267 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:49.267 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:49.267 "hdgst": ${hdgst:-false}, 00:15:49.267 "ddgst": ${ddgst:-false} 00:15:49.267 }, 00:15:49.267 "method": "bdev_nvme_attach_controller" 00:15:49.267 } 00:15:49.267 EOF 00:15:49.267 )") 00:15:49.267 10:41:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:15:49.267 10:41:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:15:49.267 10:41:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:49.267 10:41:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:49.267 "params": { 00:15:49.267 "name": "Nvme1", 00:15:49.267 "trtype": "tcp", 00:15:49.267 "traddr": "10.0.0.2", 00:15:49.267 "adrfam": "ipv4", 00:15:49.267 "trsvcid": "4420", 00:15:49.267 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:49.267 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:49.267 "hdgst": false, 00:15:49.267 "ddgst": false 00:15:49.267 }, 00:15:49.267 "method": "bdev_nvme_attach_controller" 00:15:49.267 }' 00:15:49.528 [2024-06-10 10:41:13.576191] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:15:49.528 [2024-06-10 10:41:13.576268] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid798748 ] 00:15:49.528 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.528 [2024-06-10 10:41:13.643406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.529 [2024-06-10 10:41:13.719361] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.790 Running I/O for 10 seconds... 
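Stripped of the xtrace prefixes, the zcopy.sh@22-@33 calls above are a short RPC sequence followed by a bdevperf run. rpc_cmd is the suite's wrapper around SPDK's scripts/rpc.py, so a rough equivalent against the target's default /var/tmp/spdk.sock looks like the sketch below; every RPC argument is copied from the log, and only the wrapper invocation and the illustrative config file name are assumptions:

# Transport flags exactly as logged: -t tcp, -o, -c 0 (no in-capsule data), --zcopy (zero-copy on).
scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
# Subsystem cnode1: allow any host (-a), serial SPDK00000000000001, at most 10 namespaces (-m 10).
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
# Data and discovery listeners on 10.0.0.2:4420.
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# 32 MB malloc bdev with 4096-byte blocks, exported as namespace 1 of cnode1.
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# Host side: same bdevperf flags as zcopy.sh@33, reading a config assembled from the
# bdev_nvme_attach_controller fragment that gen_nvmf_target_json prints above.
./build/examples/bdevperf --json ./nvme_tcp.json -t 10 -q 128 -w verify -o 8192

The deprecation warning about [listen_]address.transport does not prevent the listener from being created; the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice immediately after it confirms that.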
00:15:59.827
00:15:59.827 Latency(us)
00:15:59.827 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:59.827 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:15:59.827 Verification LBA range: start 0x0 length 0x1000
00:15:59.827 Nvme1n1 : 10.01 8606.89 67.24 0.00 0.00 14819.00 2239.15 26978.99
00:15:59.827 ===================================================================================================================
00:15:59.828 Total : 8606.89 67.24 0.00 0.00 14819.00 2239.15 26978.99
00:15:59.828 10:41:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=800806
00:15:59.828 10:41:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:15:59.828 10:41:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:15:59.828 10:41:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:15:59.828 10:41:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:15:59.828 10:41:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:15:59.828 10:41:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:15:59.828 10:41:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:15:59.828 10:41:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:15:59.828 {
00:15:59.828 "params": {
00:15:59.828 "name": "Nvme$subsystem",
00:15:59.828 "trtype": "$TEST_TRANSPORT",
00:15:59.828 "traddr": "$NVMF_FIRST_TARGET_IP",
00:15:59.828 "adrfam": "ipv4",
00:15:59.828 "trsvcid": "$NVMF_PORT",
00:15:59.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:15:59.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:15:59.828 "hdgst": ${hdgst:-false},
00:15:59.828 "ddgst": ${ddgst:-false}
00:15:59.828 },
00:15:59.828 "method": "bdev_nvme_attach_controller"
00:15:59.828 }
00:15:59.828 EOF
00:15:59.828 )")
00:15:59.828 [2024-06-10 10:41:24.081583] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:59.828 [2024-06-10 10:41:24.081613] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:59.828 10:41:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:15:59.828 10:41:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
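Two quick cross-checks on the verify-run table above: the MiB/s column is just IOPS times the 8192-byte I/O size, and with queue depth 128 the average latency implies roughly the same IOPS (Little's law). The values below are copied from the Nvme1n1 row:

awk 'BEGIN { printf "%.2f MiB/s\n", 8606.89 * 8192 / (1024 * 1024) }'   # 67.24 MiB/s, matches the table
awk 'BEGIN { printf "%.0f IOPS\n",  128 / (14819.00 / 1e6) }'           # ~8638, close to the measured 8606.89

From here, zcopy.sh@37 starts the second bdevperf job (-t 5 -q 128 -w randrw -M 50 -o 8192, config piped in on /dev/fd/63), and the long run of "Requested NSID 1 already in use" / "Unable to add namespace" pairs that fills the rest of this excerpt appears to come from the test repeatedly re-issuing nvmf_subsystem_add_ns for the already-attached namespace while that I/O is in flight, which is why the same two messages repeat with only the timestamps changing.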
00:15:59.828 10:41:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:59.828 10:41:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:59.828 "params": { 00:15:59.828 "name": "Nvme1", 00:15:59.828 "trtype": "tcp", 00:15:59.828 "traddr": "10.0.0.2", 00:15:59.828 "adrfam": "ipv4", 00:15:59.828 "trsvcid": "4420", 00:15:59.828 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:59.828 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:59.828 "hdgst": false, 00:15:59.828 "ddgst": false 00:15:59.828 }, 00:15:59.828 "method": "bdev_nvme_attach_controller" 00:15:59.828 }' 00:15:59.828 [2024-06-10 10:41:24.093575] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.828 [2024-06-10 10:41:24.093585] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:59.828 [2024-06-10 10:41:24.105604] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:59.828 [2024-06-10 10:41:24.105612] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.088 [2024-06-10 10:41:24.117636] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.088 [2024-06-10 10:41:24.117644] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.089 [2024-06-10 10:41:24.129665] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.089 [2024-06-10 10:41:24.129674] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.089 [2024-06-10 10:41:24.130488] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:16:00.089 [2024-06-10 10:41:24.130537] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid800806 ] 00:16:00.089 [2024-06-10 10:41:24.141695] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.089 [2024-06-10 10:41:24.141703] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.089 [2024-06-10 10:41:24.153725] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.089 [2024-06-10 10:41:24.153733] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.089 EAL: No free 2048 kB hugepages reported on node 1 00:16:00.089 [2024-06-10 10:41:24.165756] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.089 [2024-06-10 10:41:24.165764] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.089 [2024-06-10 10:41:24.177786] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.089 [2024-06-10 10:41:24.177794] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.089 [2024-06-10 10:41:24.189414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.089 [2024-06-10 10:41:24.189817] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.089 [2024-06-10 10:41:24.189825] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.089 [2024-06-10 10:41:24.201849] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.089 [2024-06-10 10:41:24.201858] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:16:00.089 [2024-06-10 10:41:24.213879] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.089 [2024-06-10 10:41:24.213888] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.089 [2024-06-10 10:41:24.225910] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.089 [2024-06-10 10:41:24.225921] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.089 [2024-06-10 10:41:24.237939] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.089 [2024-06-10 10:41:24.237950] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.089 [2024-06-10 10:41:24.249969] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.089 [2024-06-10 10:41:24.249978] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.089 [2024-06-10 10:41:24.253779] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.089 [2024-06-10 10:41:24.261997] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.089 [2024-06-10 10:41:24.262005] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.089 [2024-06-10 10:41:24.274036] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.089 [2024-06-10 10:41:24.274049] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.089 [2024-06-10 10:41:24.286062] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.089 [2024-06-10 10:41:24.286072] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.089 [2024-06-10 10:41:24.298091] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.089 [2024-06-10 10:41:24.298100] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.089 [2024-06-10 10:41:24.310120] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.089 [2024-06-10 10:41:24.310130] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.089 [2024-06-10 10:41:24.322159] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.089 [2024-06-10 10:41:24.322173] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.089 [2024-06-10 10:41:24.334196] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.089 [2024-06-10 10:41:24.334209] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.089 [2024-06-10 10:41:24.346215] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.089 [2024-06-10 10:41:24.346226] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.089 [2024-06-10 10:41:24.358249] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.089 [2024-06-10 10:41:24.358260] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.089 [2024-06-10 10:41:24.370279] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.089 [2024-06-10 10:41:24.370291] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.350 
[2024-06-10 10:41:24.382308] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.350 [2024-06-10 10:41:24.382317] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.350 [2024-06-10 10:41:24.394342] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.350 [2024-06-10 10:41:24.394353] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.350 [2024-06-10 10:41:24.406372] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.350 [2024-06-10 10:41:24.406383] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.350 [2024-06-10 10:41:24.418401] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.350 [2024-06-10 10:41:24.418409] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.350 [2024-06-10 10:41:24.430433] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.350 [2024-06-10 10:41:24.430442] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.350 [2024-06-10 10:41:24.442466] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.350 [2024-06-10 10:41:24.442476] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.350 [2024-06-10 10:41:24.454498] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.350 [2024-06-10 10:41:24.454509] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.350 [2024-06-10 10:41:24.466535] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.350 [2024-06-10 10:41:24.466544] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.350 [2024-06-10 10:41:24.478560] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.350 [2024-06-10 10:41:24.478569] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.350 [2024-06-10 10:41:24.490594] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.350 [2024-06-10 10:41:24.490605] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.350 [2024-06-10 10:41:24.502624] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.350 [2024-06-10 10:41:24.502632] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.350 [2024-06-10 10:41:24.514656] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.350 [2024-06-10 10:41:24.514664] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.350 [2024-06-10 10:41:24.526685] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.350 [2024-06-10 10:41:24.526693] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.350 [2024-06-10 10:41:24.538715] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.350 [2024-06-10 10:41:24.538724] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.350 [2024-06-10 10:41:24.550756] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.350 [2024-06-10 
10:41:24.550772] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.350 Running I/O for 5 seconds... 00:16:00.350 [2024-06-10 10:41:24.562786] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.350 [2024-06-10 10:41:24.562798] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.350 [2024-06-10 10:41:24.577770] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.350 [2024-06-10 10:41:24.577787] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.350 [2024-06-10 10:41:24.590280] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.350 [2024-06-10 10:41:24.590297] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.350 [2024-06-10 10:41:24.603442] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.350 [2024-06-10 10:41:24.603462] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.350 [2024-06-10 10:41:24.616366] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.350 [2024-06-10 10:41:24.616382] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.350 [2024-06-10 10:41:24.629550] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.350 [2024-06-10 10:41:24.629566] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.611 [2024-06-10 10:41:24.642439] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.612 [2024-06-10 10:41:24.642455] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.612 [2024-06-10 10:41:24.655644] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.612 [2024-06-10 10:41:24.655659] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.612 [2024-06-10 10:41:24.668946] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.612 [2024-06-10 10:41:24.668963] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.612 [2024-06-10 10:41:24.681946] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.612 [2024-06-10 10:41:24.681961] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.612 [2024-06-10 10:41:24.694767] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.612 [2024-06-10 10:41:24.694782] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.612 [2024-06-10 10:41:24.708074] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.612 [2024-06-10 10:41:24.708090] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.612 [2024-06-10 10:41:24.720734] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.612 [2024-06-10 10:41:24.720749] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.612 [2024-06-10 10:41:24.734225] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.612 [2024-06-10 10:41:24.734241] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.612 [2024-06-10 
10:41:24.747106] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.612 [2024-06-10 10:41:24.747121] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.612 [2024-06-10 10:41:24.760215] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.612 [2024-06-10 10:41:24.760230] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.612 [2024-06-10 10:41:24.773256] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.612 [2024-06-10 10:41:24.773271] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.612 [2024-06-10 10:41:24.786407] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.612 [2024-06-10 10:41:24.786422] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.612 [2024-06-10 10:41:24.799232] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.612 [2024-06-10 10:41:24.799251] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.612 [2024-06-10 10:41:24.812556] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.612 [2024-06-10 10:41:24.812571] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.612 [2024-06-10 10:41:24.824918] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.612 [2024-06-10 10:41:24.824933] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.612 [2024-06-10 10:41:24.838469] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.612 [2024-06-10 10:41:24.838483] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.612 [2024-06-10 10:41:24.851501] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.612 [2024-06-10 10:41:24.851520] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.612 [2024-06-10 10:41:24.863986] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.612 [2024-06-10 10:41:24.864001] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.612 [2024-06-10 10:41:24.876711] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.612 [2024-06-10 10:41:24.876726] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.612 [2024-06-10 10:41:24.889206] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.612 [2024-06-10 10:41:24.889222] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.874 [2024-06-10 10:41:24.902356] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.874 [2024-06-10 10:41:24.902371] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.874 [2024-06-10 10:41:24.915152] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.874 [2024-06-10 10:41:24.915167] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.874 [2024-06-10 10:41:24.928654] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.874 [2024-06-10 10:41:24.928669] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.874 [2024-06-10 10:41:24.941879] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.874 [2024-06-10 10:41:24.941893] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.874 [2024-06-10 10:41:24.955111] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.874 [2024-06-10 10:41:24.955126] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.874 [2024-06-10 10:41:24.968368] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.874 [2024-06-10 10:41:24.968384] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.874 [2024-06-10 10:41:24.981665] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.874 [2024-06-10 10:41:24.981681] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.874 [2024-06-10 10:41:24.994662] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.874 [2024-06-10 10:41:24.994677] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.874 [2024-06-10 10:41:25.007496] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.874 [2024-06-10 10:41:25.007511] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.874 [2024-06-10 10:41:25.020485] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.874 [2024-06-10 10:41:25.020501] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.874 [2024-06-10 10:41:25.033238] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.874 [2024-06-10 10:41:25.033258] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.874 [2024-06-10 10:41:25.046111] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.874 [2024-06-10 10:41:25.046126] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.874 [2024-06-10 10:41:25.059272] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.874 [2024-06-10 10:41:25.059287] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.874 [2024-06-10 10:41:25.072573] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.874 [2024-06-10 10:41:25.072589] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.874 [2024-06-10 10:41:25.085800] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.874 [2024-06-10 10:41:25.085815] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.874 [2024-06-10 10:41:25.098588] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.874 [2024-06-10 10:41:25.098603] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.874 [2024-06-10 10:41:25.111636] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.874 [2024-06-10 10:41:25.111651] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.874 [2024-06-10 10:41:25.123918] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.874 [2024-06-10 10:41:25.123933] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.874 [2024-06-10 10:41:25.137586] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.874 [2024-06-10 10:41:25.137601] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.874 [2024-06-10 10:41:25.150383] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:00.874 [2024-06-10 10:41:25.150398] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.134 [2024-06-10 10:41:25.163358] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.135 [2024-06-10 10:41:25.163373] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.135 [2024-06-10 10:41:25.176448] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.135 [2024-06-10 10:41:25.176463] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.135 [2024-06-10 10:41:25.189343] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.135 [2024-06-10 10:41:25.189358] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.135 [2024-06-10 10:41:25.201873] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.135 [2024-06-10 10:41:25.201887] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.135 [2024-06-10 10:41:25.214901] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.135 [2024-06-10 10:41:25.214916] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.135 [2024-06-10 10:41:25.228183] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.135 [2024-06-10 10:41:25.228198] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.135 [2024-06-10 10:41:25.241419] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.135 [2024-06-10 10:41:25.241434] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.135 [2024-06-10 10:41:25.254674] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.135 [2024-06-10 10:41:25.254689] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.135 [2024-06-10 10:41:25.268157] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.135 [2024-06-10 10:41:25.268172] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.135 [2024-06-10 10:41:25.281768] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.135 [2024-06-10 10:41:25.281783] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.135 [2024-06-10 10:41:25.295104] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.135 [2024-06-10 10:41:25.295119] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.135 [2024-06-10 10:41:25.308089] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.135 [2024-06-10 10:41:25.308103] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.135 [2024-06-10 10:41:25.320630] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.135 [2024-06-10 10:41:25.320645] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.135 [2024-06-10 10:41:25.333561] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.135 [2024-06-10 10:41:25.333576] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.135 [2024-06-10 10:41:25.346842] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.135 [2024-06-10 10:41:25.346857] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.135 [2024-06-10 10:41:25.360124] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.135 [2024-06-10 10:41:25.360139] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.135 [2024-06-10 10:41:25.373440] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.135 [2024-06-10 10:41:25.373455] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.135 [2024-06-10 10:41:25.386631] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.135 [2024-06-10 10:41:25.386646] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.135 [2024-06-10 10:41:25.400205] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.135 [2024-06-10 10:41:25.400221] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.135 [2024-06-10 10:41:25.413005] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.135 [2024-06-10 10:41:25.413020] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.396 [2024-06-10 10:41:25.426085] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.396 [2024-06-10 10:41:25.426101] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.396 [2024-06-10 10:41:25.439108] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.396 [2024-06-10 10:41:25.439123] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.396 [2024-06-10 10:41:25.452349] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.396 [2024-06-10 10:41:25.452364] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.396 [2024-06-10 10:41:25.465701] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.396 [2024-06-10 10:41:25.465717] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.396 [2024-06-10 10:41:25.479089] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.396 [2024-06-10 10:41:25.479104] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.396 [2024-06-10 10:41:25.492402] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.396 [2024-06-10 10:41:25.492417] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.396 [2024-06-10 10:41:25.505831] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.396 [2024-06-10 10:41:25.505846] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.396 [2024-06-10 10:41:25.519037] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.396 [2024-06-10 10:41:25.519051] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.396 [2024-06-10 10:41:25.531950] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.396 [2024-06-10 10:41:25.531965] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.396 [2024-06-10 10:41:25.544318] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.396 [2024-06-10 10:41:25.544333] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.396 [2024-06-10 10:41:25.557395] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.396 [2024-06-10 10:41:25.557410] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.396 [2024-06-10 10:41:25.570763] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.396 [2024-06-10 10:41:25.570778] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.396 [2024-06-10 10:41:25.583909] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.396 [2024-06-10 10:41:25.583924] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.396 [2024-06-10 10:41:25.597141] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.396 [2024-06-10 10:41:25.597156] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.396 [2024-06-10 10:41:25.610115] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.396 [2024-06-10 10:41:25.610130] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.396 [2024-06-10 10:41:25.622637] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.396 [2024-06-10 10:41:25.622652] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.396 [2024-06-10 10:41:25.636140] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.396 [2024-06-10 10:41:25.636155] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.396 [2024-06-10 10:41:25.649336] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.396 [2024-06-10 10:41:25.649352] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.396 [2024-06-10 10:41:25.662554] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.396 [2024-06-10 10:41:25.662569] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.396 [2024-06-10 10:41:25.675771] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.396 [2024-06-10 10:41:25.675787] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-06-10 10:41:25.689126] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-06-10 10:41:25.689141] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-06-10 10:41:25.702554] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-06-10 10:41:25.702570] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-06-10 10:41:25.715374] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-06-10 10:41:25.715390] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-06-10 10:41:25.728835] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-06-10 10:41:25.728850] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-06-10 10:41:25.741394] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-06-10 10:41:25.741410] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-06-10 10:41:25.754345] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-06-10 10:41:25.754361] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-06-10 10:41:25.766836] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-06-10 10:41:25.766851] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-06-10 10:41:25.779980] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-06-10 10:41:25.779996] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-06-10 10:41:25.793336] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-06-10 10:41:25.793352] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-06-10 10:41:25.806593] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-06-10 10:41:25.806608] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-06-10 10:41:25.819651] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-06-10 10:41:25.819666] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-06-10 10:41:25.832849] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-06-10 10:41:25.832864] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-06-10 10:41:25.846301] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-06-10 10:41:25.846316] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-06-10 10:41:25.859630] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-06-10 10:41:25.859646] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-06-10 10:41:25.872270] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-06-10 10:41:25.872285] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-06-10 10:41:25.885507] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-06-10 10:41:25.885522] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-06-10 10:41:25.898257] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-06-10 10:41:25.898273] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-06-10 10:41:25.911248] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-06-10 10:41:25.911264] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-06-10 10:41:25.924374] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-06-10 10:41:25.924389] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.658 [2024-06-10 10:41:25.937774] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.658 [2024-06-10 10:41:25.937789] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.918 [2024-06-10 10:41:25.951000] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.918 [2024-06-10 10:41:25.951015] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.918 [2024-06-10 10:41:25.963929] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.918 [2024-06-10 10:41:25.963944] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.918 [2024-06-10 10:41:25.976791] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.918 [2024-06-10 10:41:25.976806] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.918 [2024-06-10 10:41:25.989870] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.918 [2024-06-10 10:41:25.989886] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.918 [2024-06-10 10:41:26.003278] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.918 [2024-06-10 10:41:26.003294] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.918 [2024-06-10 10:41:26.016598] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.918 [2024-06-10 10:41:26.016613] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.918 [2024-06-10 10:41:26.028753] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.918 [2024-06-10 10:41:26.028768] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.918 [2024-06-10 10:41:26.042161] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.918 [2024-06-10 10:41:26.042177] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.918 [2024-06-10 10:41:26.055634] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.918 [2024-06-10 10:41:26.055649] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.918 [2024-06-10 10:41:26.067869] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.918 [2024-06-10 10:41:26.067884] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.918 [2024-06-10 10:41:26.081442] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.918 [2024-06-10 10:41:26.081460] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.918 [2024-06-10 10:41:26.094723] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.918 [2024-06-10 10:41:26.094738] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.918 [2024-06-10 10:41:26.107903] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.918 [2024-06-10 10:41:26.107918] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.918 [2024-06-10 10:41:26.121229] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.918 [2024-06-10 10:41:26.121249] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.918 [2024-06-10 10:41:26.134209] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.918 [2024-06-10 10:41:26.134224] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.918 [2024-06-10 10:41:26.147480] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.918 [2024-06-10 10:41:26.147496] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.918 [2024-06-10 10:41:26.160803] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.918 [2024-06-10 10:41:26.160820] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.918 [2024-06-10 10:41:26.173727] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.918 [2024-06-10 10:41:26.173742] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.918 [2024-06-10 10:41:26.187161] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.918 [2024-06-10 10:41:26.187177] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:01.919 [2024-06-10 10:41:26.200416] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:01.919 [2024-06-10 10:41:26.200431] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.179 [2024-06-10 10:41:26.213312] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.179 [2024-06-10 10:41:26.213327] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.179 [2024-06-10 10:41:26.226114] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.179 [2024-06-10 10:41:26.226130] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.179 [2024-06-10 10:41:26.238922] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.180 [2024-06-10 10:41:26.238938] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.180 [2024-06-10 10:41:26.252267] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.180 [2024-06-10 10:41:26.252283] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.180 [2024-06-10 10:41:26.265787] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.180 [2024-06-10 10:41:26.265803] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.180 [2024-06-10 10:41:26.279046] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.180 [2024-06-10 10:41:26.279062] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.180 [2024-06-10 10:41:26.292270] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.180 [2024-06-10 10:41:26.292286] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.180 [2024-06-10 10:41:26.305490] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.180 [2024-06-10 10:41:26.305506] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.180 [2024-06-10 10:41:26.319004] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.180 [2024-06-10 10:41:26.319020] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.180 [2024-06-10 10:41:26.331575] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.180 [2024-06-10 10:41:26.331594] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.180 [2024-06-10 10:41:26.343655] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.180 [2024-06-10 10:41:26.343670] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.180 [2024-06-10 10:41:26.356634] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.180 [2024-06-10 10:41:26.356649] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.180 [2024-06-10 10:41:26.370192] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.180 [2024-06-10 10:41:26.370206] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.180 [2024-06-10 10:41:26.383262] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.180 [2024-06-10 10:41:26.383277] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.180 [2024-06-10 10:41:26.396454] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.180 [2024-06-10 10:41:26.396468] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.180 [2024-06-10 10:41:26.410077] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.180 [2024-06-10 10:41:26.410092] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.180 [2024-06-10 10:41:26.422593] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.180 [2024-06-10 10:41:26.422608] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.180 [2024-06-10 10:41:26.435078] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.180 [2024-06-10 10:41:26.435093] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.180 [2024-06-10 10:41:26.448423] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.180 [2024-06-10 10:41:26.448438] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:02.180 [2024-06-10 10:41:26.461904] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:02.180 [2024-06-10 10:41:26.461919] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair (subsystem.c:2029: "Requested NSID 1 already in use" followed by nvmf_rpc.c:1536: "Unable to add namespace") is logged for every subsequent add-namespace attempt, up to and including 00:16:05.316 [2024-06-10 10:41:29.570468] ...]
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.316 [2024-06-10 10:41:29.510028] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.316 [2024-06-10 10:41:29.510043] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.316 [2024-06-10 10:41:29.518763] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.316 [2024-06-10 10:41:29.518779] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.316 [2024-06-10 10:41:29.527400] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.316 [2024-06-10 10:41:29.527419] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.316 [2024-06-10 10:41:29.536508] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.316 [2024-06-10 10:41:29.536522] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.316 [2024-06-10 10:41:29.544786] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.316 [2024-06-10 10:41:29.544800] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.316 [2024-06-10 10:41:29.553620] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.316 [2024-06-10 10:41:29.553635] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.316 [2024-06-10 10:41:29.562297] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.316 [2024-06-10 10:41:29.562311] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.316 [2024-06-10 10:41:29.570453] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.316 [2024-06-10 10:41:29.570468] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.316 00:16:05.316 Latency(us) 00:16:05.316 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:05.316 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:16:05.316 Nvme1n1 : 5.01 19504.79 152.38 0.00 0.00 6555.69 2484.91 16056.32 00:16:05.316 =================================================================================================================== 00:16:05.316 Total : 19504.79 152.38 0.00 0.00 6555.69 2484.91 16056.32 00:16:05.316 [2024-06-10 10:41:29.576778] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.316 [2024-06-10 10:41:29.576792] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.316 [2024-06-10 10:41:29.584796] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.316 [2024-06-10 10:41:29.584808] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.316 [2024-06-10 10:41:29.592816] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.317 [2024-06-10 10:41:29.592828] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.317 [2024-06-10 10:41:29.600838] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.317 [2024-06-10 10:41:29.600849] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.578 [2024-06-10 10:41:29.608858] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.578 [2024-06-10 10:41:29.608869] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.578 [2024-06-10 10:41:29.616877] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.578 [2024-06-10 10:41:29.616886] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.578 [2024-06-10 10:41:29.624895] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.578 [2024-06-10 10:41:29.624904] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.578 [2024-06-10 10:41:29.632916] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.578 [2024-06-10 10:41:29.632925] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.578 [2024-06-10 10:41:29.640935] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.578 [2024-06-10 10:41:29.640943] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.578 [2024-06-10 10:41:29.648955] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.578 [2024-06-10 10:41:29.648964] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.578 [2024-06-10 10:41:29.656975] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.578 [2024-06-10 10:41:29.656989] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.578 [2024-06-10 10:41:29.664996] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.578 [2024-06-10 10:41:29.665006] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.578 [2024-06-10 10:41:29.673018] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.578 [2024-06-10 10:41:29.673029] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.578 [2024-06-10 10:41:29.681036] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.578 [2024-06-10 10:41:29.681046] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.578 [2024-06-10 10:41:29.689058] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.578 [2024-06-10 10:41:29.689070] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.578 [2024-06-10 10:41:29.697077] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.578 [2024-06-10 10:41:29.697085] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.578 [2024-06-10 10:41:29.705099] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.578 [2024-06-10 10:41:29.705108] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.578 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (800806) - No such process 00:16:05.578 10:41:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 800806 00:16:05.578 10:41:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:05.578 10:41:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:16:05.578 10:41:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:05.578 10:41:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:05.578 10:41:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:05.578 10:41:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:05.578 10:41:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:05.578 delay0 00:16:05.578 10:41:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:05.578 10:41:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:05.578 10:41:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:05.578 10:41:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:05.578 10:41:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:05.578 10:41:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:05.578 EAL: No free 2048 kB hugepages reported on node 1 00:16:05.578 [2024-06-10 10:41:29.838801] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:12.245 Initializing NVMe Controllers 00:16:12.246 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:12.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:12.246 Initialization complete. Launching workers. 
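Note on the zcopy pass above: after waiting out the repeated nvmf_subsystem_add_ns failures (NSID 1 is still claimed), the script removes NSID 1 from cnode1, wraps malloc0 in a delay bdev (delay0), re-adds it as NSID 1, and then drives it with the bundled abort example over TCP. A minimal standalone sketch of that same sequence, assuming the default /var/tmp/spdk.sock RPC socket and relative paths inside the spdk checkout used in this run:

    # swap NSID 1 of cnode1 over to a delay bdev, then issue aborts against it
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'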
00:16:12.246 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 372 00:16:12.246 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 659, failed to submit 33 00:16:12.246 success 498, unsuccess 161, failed 0 00:16:12.246 10:41:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:12.246 10:41:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:16:12.246 10:41:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:12.246 10:41:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:16:12.246 10:41:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:12.246 10:41:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:16:12.246 10:41:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:12.246 10:41:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:12.246 rmmod nvme_tcp 00:16:12.246 rmmod nvme_fabrics 00:16:12.246 rmmod nvme_keyring 00:16:12.246 10:41:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:12.246 10:41:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:16:12.246 10:41:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:16:12.246 10:41:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 798709 ']' 00:16:12.246 10:41:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 798709 00:16:12.246 10:41:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@949 -- # '[' -z 798709 ']' 00:16:12.246 10:41:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # kill -0 798709 00:16:12.246 10:41:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # uname 00:16:12.246 10:41:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:12.246 10:41:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 798709 00:16:12.246 10:41:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:16:12.246 10:41:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:16:12.246 10:41:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # echo 'killing process with pid 798709' 00:16:12.246 killing process with pid 798709 00:16:12.246 10:41:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@968 -- # kill 798709 00:16:12.246 [2024-06-10 10:41:36.143349] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:12.246 10:41:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@973 -- # wait 798709 00:16:12.246 10:41:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:12.246 10:41:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:12.246 10:41:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:12.246 10:41:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:12.246 10:41:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:12.246 10:41:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.246 10:41:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:12.246 10:41:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.158 10:41:38 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:14.158 00:16:14.158 real 0m33.199s 00:16:14.158 user 0m44.877s 00:16:14.158 sys 0m10.255s 00:16:14.158 10:41:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:14.158 10:41:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:14.158 ************************************ 00:16:14.158 END TEST nvmf_zcopy 00:16:14.158 ************************************ 00:16:14.158 10:41:38 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:14.158 10:41:38 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:14.158 10:41:38 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:14.158 10:41:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:14.158 ************************************ 00:16:14.158 START TEST nvmf_nmic 00:16:14.158 ************************************ 00:16:14.158 10:41:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:14.418 * Looking for test storage... 00:16:14.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:14.418 10:41:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:14.418 10:41:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:16:14.418 10:41:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:14.418 10:41:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:14.418 10:41:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:14.418 10:41:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:14.418 10:41:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:14.418 10:41:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:14.418 10:41:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:14.418 10:41:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:14.418 10:41:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:14.418 10:41:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:14.418 10:41:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:14.418 10:41:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:14.418 10:41:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:14.418 10:41:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:14.418 10:41:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:14.418 10:41:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:14.418 10:41:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:14.418 10:41:38 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:14.418 10:41:38 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:14.418 10:41:38 nvmf_tcp.nvmf_nmic -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:14.418 10:41:38 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.419 10:41:38 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.419 10:41:38 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.419 10:41:38 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:16:14.419 10:41:38 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.419 10:41:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:16:14.419 10:41:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:14.419 10:41:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:14.419 10:41:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:14.419 10:41:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:14.419 10:41:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:14.419 10:41:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:14.419 10:41:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:14.419 10:41:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:14.419 10:41:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:14.419 10:41:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:14.419 10:41:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # 
nvmftestinit 00:16:14.419 10:41:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:14.419 10:41:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:14.419 10:41:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:14.419 10:41:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:14.419 10:41:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:14.419 10:41:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.419 10:41:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:14.419 10:41:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.419 10:41:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:14.419 10:41:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:14.419 10:41:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:16:14.419 10:41:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:22.558 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:22.558 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:16:22.558 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:22.558 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:22.558 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:22.558 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:22.558 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:22.558 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:16:22.558 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:22.558 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:16:22.558 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:16:22.558 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:22.559 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:22.559 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:22.559 Found net devices under 0000:31:00.0: cvl_0_0 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:22.559 10:41:45 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:22.559 Found net devices under 0000:31:00.1: cvl_0_1 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:22.559 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:22.559 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms 00:16:22.559 00:16:22.559 --- 10.0.0.2 ping statistics --- 00:16:22.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.559 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:22.559 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:22.559 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:16:22.559 00:16:22.559 --- 10.0.0.1 ping statistics --- 00:16:22.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.559 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=807481 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 807481 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@830 -- # '[' -z 807481 ']' 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:22.559 10:41:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:22.559 [2024-06-10 10:41:45.862891] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
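For reference, the nvmf_tcp_init trace above boils down to a two-port loopback topology: the target NIC (cvl_0_0) is moved into its own network namespace and the initiator side (cvl_0_1) reaches it over 10.0.0.0/24. A condensed sketch of that setup, assuming the same interface names and addresses as this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP through
    ping -c 1 10.0.0.2                                   # initiator -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator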
00:16:22.559 [2024-06-10 10:41:45.862960] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:22.559 EAL: No free 2048 kB hugepages reported on node 1 00:16:22.559 [2024-06-10 10:41:45.935992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:22.559 [2024-06-10 10:41:46.012062] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:22.559 [2024-06-10 10:41:46.012100] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:22.559 [2024-06-10 10:41:46.012108] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:22.559 [2024-06-10 10:41:46.012114] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:22.560 [2024-06-10 10:41:46.012120] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:22.560 [2024-06-10 10:41:46.012278] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.560 [2024-06-10 10:41:46.012353] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:16:22.560 [2024-06-10 10:41:46.012518] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.560 [2024-06-10 10:41:46.012519] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@863 -- # return 0 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:22.560 [2024-06-10 10:41:46.693800] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:22.560 Malloc0 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:22.560 [2024-06-10 10:41:46.753027] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:22.560 [2024-06-10 10:41:46.753258] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:22.560 test case1: single bdev can't be used in multiple subsystems 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:22.560 [2024-06-10 10:41:46.789176] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:22.560 [2024-06-10 10:41:46.789193] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:22.560 [2024-06-10 10:41:46.789201] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:22.560 request: 00:16:22.560 { 00:16:22.560 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:22.560 "namespace": { 00:16:22.560 "bdev_name": "Malloc0", 00:16:22.560 "no_auto_visible": false 00:16:22.560 }, 00:16:22.560 "method": "nvmf_subsystem_add_ns", 00:16:22.560 "req_id": 1 00:16:22.560 } 00:16:22.560 Got JSON-RPC error response 00:16:22.560 response: 00:16:22.560 { 00:16:22.560 "code": -32602, 00:16:22.560 "message": "Invalid parameters" 00:16:22.560 } 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:16:22.560 10:41:46 
nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:16:22.560 Adding namespace failed - expected result. 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:22.560 test case2: host connect to nvmf target in multiple paths 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:22.560 [2024-06-10 10:41:46.801308] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:22.560 10:41:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:24.471 10:41:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:25.853 10:41:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:25.853 10:41:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1197 -- # local i=0 00:16:25.853 10:41:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:16:25.853 10:41:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:16:25.853 10:41:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # sleep 2 00:16:27.758 10:41:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:16:27.758 10:41:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:27.758 10:41:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:16:27.758 10:41:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:16:27.758 10:41:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:16:27.758 10:41:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # return 0 00:16:27.759 10:41:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:27.759 [global] 00:16:27.759 thread=1 00:16:27.759 invalidate=1 00:16:27.759 rw=write 00:16:27.759 time_based=1 00:16:27.759 runtime=1 00:16:27.759 ioengine=libaio 00:16:27.759 direct=1 00:16:27.759 bs=4096 00:16:27.759 iodepth=1 00:16:27.759 norandommap=0 00:16:27.759 numjobs=1 00:16:27.759 00:16:27.759 verify_dump=1 00:16:27.759 verify_backlog=512 00:16:27.759 verify_state_save=0 00:16:27.759 do_verify=1 00:16:27.759 verify=crc32c-intel 00:16:27.759 [job0] 00:16:27.759 filename=/dev/nvme0n1 00:16:27.759 Could not set queue depth (nvme0n1) 00:16:28.039 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:16:28.039 fio-3.35 00:16:28.039 Starting 1 thread 00:16:29.421 00:16:29.421 job0: (groupid=0, jobs=1): err= 0: pid=808919: Mon Jun 10 10:41:53 2024 00:16:29.422 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:16:29.422 slat (nsec): min=24268, max=59646, avg=25384.71, stdev=3499.16 00:16:29.422 clat (usec): min=881, max=1350, avg=1094.75, stdev=55.12 00:16:29.422 lat (usec): min=906, max=1375, avg=1120.14, stdev=55.11 00:16:29.422 clat percentiles (usec): 00:16:29.422 | 1.00th=[ 938], 5.00th=[ 1004], 10.00th=[ 1020], 20.00th=[ 1057], 00:16:29.422 | 30.00th=[ 1074], 40.00th=[ 1090], 50.00th=[ 1106], 60.00th=[ 1106], 00:16:29.422 | 70.00th=[ 1123], 80.00th=[ 1139], 90.00th=[ 1156], 95.00th=[ 1188], 00:16:29.422 | 99.00th=[ 1221], 99.50th=[ 1237], 99.90th=[ 1352], 99.95th=[ 1352], 00:16:29.422 | 99.99th=[ 1352] 00:16:29.422 write: IOPS=560, BW=2242KiB/s (2296kB/s)(2244KiB/1001msec); 0 zone resets 00:16:29.422 slat (usec): min=9, max=28315, avg=78.24, stdev=1194.34 00:16:29.422 clat (usec): min=410, max=881, avg=666.86, stdev=93.95 00:16:29.422 lat (usec): min=420, max=29114, avg=745.10, stdev=1204.01 00:16:29.422 clat percentiles (usec): 00:16:29.422 | 1.00th=[ 445], 5.00th=[ 465], 10.00th=[ 537], 20.00th=[ 578], 00:16:29.422 | 30.00th=[ 635], 40.00th=[ 660], 50.00th=[ 676], 60.00th=[ 709], 00:16:29.422 | 70.00th=[ 734], 80.00th=[ 742], 90.00th=[ 766], 95.00th=[ 807], 00:16:29.422 | 99.00th=[ 840], 99.50th=[ 857], 99.90th=[ 881], 99.95th=[ 881], 00:16:29.422 | 99.99th=[ 881] 00:16:29.422 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:16:29.422 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:29.422 lat (usec) : 500=3.54%, 750=40.35%, 1000=10.34% 00:16:29.422 lat (msec) : 2=45.76% 00:16:29.422 cpu : usr=1.70%, sys=2.80%, ctx=1077, majf=0, minf=1 00:16:29.422 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:29.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:29.422 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:29.422 issued rwts: total=512,561,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:29.422 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:29.422 00:16:29.422 Run status group 0 (all jobs): 00:16:29.422 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:16:29.422 WRITE: bw=2242KiB/s (2296kB/s), 2242KiB/s-2242KiB/s (2296kB/s-2296kB/s), io=2244KiB (2298kB), run=1001-1001msec 00:16:29.422 00:16:29.422 Disk stats (read/write): 00:16:29.422 nvme0n1: ios=477/512, merge=0/0, ticks=1458/316, in_queue=1774, util=99.00% 00:16:29.422 10:41:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:29.422 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:29.422 10:41:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:29.422 10:41:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1218 -- # local i=0 00:16:29.422 10:41:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:16:29.422 10:41:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:29.422 10:41:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:16:29.422 10:41:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:29.422 10:41:53 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1230 -- # return 0 00:16:29.422 10:41:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:29.422 10:41:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:16:29.422 10:41:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:29.422 10:41:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:16:29.422 10:41:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:29.422 10:41:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:16:29.422 10:41:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:29.422 10:41:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:29.422 rmmod nvme_tcp 00:16:29.422 rmmod nvme_fabrics 00:16:29.422 rmmod nvme_keyring 00:16:29.422 10:41:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:29.422 10:41:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:16:29.422 10:41:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:16:29.422 10:41:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 807481 ']' 00:16:29.422 10:41:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 807481 00:16:29.422 10:41:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@949 -- # '[' -z 807481 ']' 00:16:29.422 10:41:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # kill -0 807481 00:16:29.422 10:41:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # uname 00:16:29.422 10:41:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:29.422 10:41:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 807481 00:16:29.422 10:41:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:29.422 10:41:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:29.422 10:41:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # echo 'killing process with pid 807481' 00:16:29.422 killing process with pid 807481 00:16:29.422 10:41:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@968 -- # kill 807481 00:16:29.422 [2024-06-10 10:41:53.635137] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:29.422 10:41:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@973 -- # wait 807481 00:16:29.683 10:41:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:29.683 10:41:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:29.683 10:41:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:29.683 10:41:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:29.683 10:41:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:29.683 10:41:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.683 10:41:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:29.683 10:41:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.597 10:41:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:31.597 00:16:31.597 real 0m17.449s 00:16:31.597 user 0m44.959s 00:16:31.597 sys 0m6.409s 00:16:31.597 10:41:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1125 -- # 
xtrace_disable 00:16:31.597 10:41:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:31.597 ************************************ 00:16:31.597 END TEST nvmf_nmic 00:16:31.597 ************************************ 00:16:31.859 10:41:55 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:31.859 10:41:55 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:31.859 10:41:55 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:31.859 10:41:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:31.859 ************************************ 00:16:31.859 START TEST nvmf_fio_target 00:16:31.859 ************************************ 00:16:31.859 10:41:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:31.859 * Looking for test storage... 00:16:31.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target 
-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:31.859 10:41:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.005 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:40.005 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:40.005 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:40.005 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:40.005 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:40.005 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:40.005 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:40.005 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:40.005 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:40.005 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:16:40.005 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:40.005 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:16:40.005 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:40.005 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:16:40.005 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:40.005 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:40.005 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:40.005 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:40.005 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:40.005 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:40.005 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:40.005 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:40.005 10:42:03 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:40.005 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:40.005 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:40.005 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:40.005 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:40.005 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:40.005 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:40.005 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:40.006 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:40.006 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.006 10:42:03 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:40.006 Found net devices under 0000:31:00.0: cvl_0_0 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:40.006 Found net devices under 0000:31:00.1: cvl_0_1 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:40.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:40.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:16:40.006 00:16:40.006 --- 10.0.0.2 ping statistics --- 00:16:40.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.006 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:40.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:40.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:16:40.006 00:16:40.006 --- 10.0.0.1 ping statistics --- 00:16:40.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.006 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=813528 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 813528 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@830 -- # '[' -z 813528 ']' 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
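For reference, the network plumbing that nvmftestinit traced above reduces to roughly the shell steps below. This is a condensed sketch assembled only from the commands visible in this log; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses belong to this run's two E810 ports and are not fixed values.
# sketch, assuming root and the two E810 netdevs discovered above (cvl_0_0 = target side, cvl_0_1 = initiator side)
ip netns add cvl_0_0_ns_spdk                                  # isolate the target-side port in its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic reach the listener port
ping -c 1 10.0.0.2                                            # initiator -> target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator reachability check
modprobe nvme-tcp                                             # kernel initiator used by the later 'nvme connect'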
00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:40.006 10:42:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.006 [2024-06-10 10:42:03.521536] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:16:40.006 [2024-06-10 10:42:03.521583] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:40.006 EAL: No free 2048 kB hugepages reported on node 1 00:16:40.006 [2024-06-10 10:42:03.588752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:40.006 [2024-06-10 10:42:03.653779] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:40.006 [2024-06-10 10:42:03.653815] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:40.006 [2024-06-10 10:42:03.653823] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:40.006 [2024-06-10 10:42:03.653830] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:40.006 [2024-06-10 10:42:03.653835] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:40.006 [2024-06-10 10:42:03.653978] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.006 [2024-06-10 10:42:03.654093] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:16:40.006 [2024-06-10 10:42:03.654253] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.006 [2024-06-10 10:42:03.654274] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:16:40.006 10:42:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:40.006 10:42:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@863 -- # return 0 00:16:40.006 10:42:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:40.006 10:42:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:40.268 10:42:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.268 10:42:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:40.268 10:42:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:40.268 [2024-06-10 10:42:04.469304] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:40.268 10:42:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:40.528 10:42:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:40.528 10:42:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:40.788 10:42:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:40.788 10:42:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:40.788 10:42:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:40.788 10:42:05 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:41.047 10:42:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:41.047 10:42:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:41.307 10:42:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:41.307 10:42:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:41.307 10:42:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:41.568 10:42:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:41.568 10:42:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:41.830 10:42:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:41.830 10:42:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:41.830 10:42:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:42.091 10:42:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:42.091 10:42:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:42.352 10:42:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:42.352 10:42:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:42.352 10:42:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:42.614 [2024-06-10 10:42:06.731109] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:42.614 [2024-06-10 10:42:06.731379] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:42.614 10:42:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:42.874 10:42:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:42.874 10:42:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:44.786 10:42:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 
-- # waitforserial SPDKISFASTANDAWESOME 4 00:16:44.786 10:42:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # local i=0 00:16:44.786 10:42:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:16:44.786 10:42:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # [[ -n 4 ]] 00:16:44.786 10:42:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # nvme_device_counter=4 00:16:44.786 10:42:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # sleep 2 00:16:46.727 10:42:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:16:46.727 10:42:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:46.727 10:42:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:16:46.727 10:42:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # nvme_devices=4 00:16:46.727 10:42:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:16:46.727 10:42:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # return 0 00:16:46.727 10:42:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:46.727 [global] 00:16:46.727 thread=1 00:16:46.727 invalidate=1 00:16:46.727 rw=write 00:16:46.727 time_based=1 00:16:46.727 runtime=1 00:16:46.727 ioengine=libaio 00:16:46.727 direct=1 00:16:46.727 bs=4096 00:16:46.727 iodepth=1 00:16:46.727 norandommap=0 00:16:46.727 numjobs=1 00:16:46.727 00:16:46.727 verify_dump=1 00:16:46.727 verify_backlog=512 00:16:46.727 verify_state_save=0 00:16:46.727 do_verify=1 00:16:46.727 verify=crc32c-intel 00:16:46.727 [job0] 00:16:46.727 filename=/dev/nvme0n1 00:16:46.728 [job1] 00:16:46.728 filename=/dev/nvme0n2 00:16:46.728 [job2] 00:16:46.728 filename=/dev/nvme0n3 00:16:46.728 [job3] 00:16:46.728 filename=/dev/nvme0n4 00:16:46.728 Could not set queue depth (nvme0n1) 00:16:46.728 Could not set queue depth (nvme0n2) 00:16:46.728 Could not set queue depth (nvme0n3) 00:16:46.728 Could not set queue depth (nvme0n4) 00:16:46.988 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:46.988 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:46.988 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:46.988 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:46.988 fio-3.35 00:16:46.988 Starting 4 threads 00:16:48.400 00:16:48.400 job0: (groupid=0, jobs=1): err= 0: pid=815586: Mon Jun 10 10:42:12 2024 00:16:48.400 read: IOPS=17, BW=71.9KiB/s (73.7kB/s)(72.0KiB/1001msec) 00:16:48.400 slat (nsec): min=26517, max=27716, avg=26852.22, stdev=275.95 00:16:48.400 clat (usec): min=801, max=42198, avg=39684.54, stdev=9704.23 00:16:48.400 lat (usec): min=828, max=42225, avg=39711.39, stdev=9704.26 00:16:48.400 clat percentiles (usec): 00:16:48.400 | 1.00th=[ 799], 5.00th=[ 799], 10.00th=[41681], 20.00th=[41681], 00:16:48.400 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:16:48.400 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:48.400 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:48.400 | 
99.99th=[42206] 00:16:48.400 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:16:48.400 slat (nsec): min=9443, max=53787, avg=31309.86, stdev=9749.51 00:16:48.400 clat (usec): min=180, max=833, avg=512.34, stdev=122.52 00:16:48.400 lat (usec): min=201, max=867, avg=543.65, stdev=125.46 00:16:48.400 clat percentiles (usec): 00:16:48.400 | 1.00th=[ 243], 5.00th=[ 293], 10.00th=[ 355], 20.00th=[ 408], 00:16:48.400 | 30.00th=[ 449], 40.00th=[ 486], 50.00th=[ 510], 60.00th=[ 545], 00:16:48.400 | 70.00th=[ 586], 80.00th=[ 619], 90.00th=[ 676], 95.00th=[ 709], 00:16:48.400 | 99.00th=[ 766], 99.50th=[ 791], 99.90th=[ 832], 99.95th=[ 832], 00:16:48.400 | 99.99th=[ 832] 00:16:48.400 bw ( KiB/s): min= 4096, max= 4096, per=48.24%, avg=4096.00, stdev= 0.00, samples=1 00:16:48.400 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:48.400 lat (usec) : 250=1.13%, 500=43.77%, 750=49.62%, 1000=2.26% 00:16:48.400 lat (msec) : 50=3.21% 00:16:48.400 cpu : usr=1.20%, sys=1.80%, ctx=532, majf=0, minf=1 00:16:48.400 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:48.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.400 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:48.400 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:48.400 job1: (groupid=0, jobs=1): err= 0: pid=815590: Mon Jun 10 10:42:12 2024 00:16:48.400 read: IOPS=18, BW=74.7KiB/s (76.5kB/s)(76.0KiB/1017msec) 00:16:48.400 slat (nsec): min=25730, max=26375, avg=26029.58, stdev=171.23 00:16:48.400 clat (usec): min=40927, max=42011, avg=41849.28, stdev=321.29 00:16:48.400 lat (usec): min=40953, max=42037, avg=41875.31, stdev=321.23 00:16:48.400 clat percentiles (usec): 00:16:48.400 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:16:48.400 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:16:48.400 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:48.400 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:48.400 | 99.99th=[42206] 00:16:48.400 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:16:48.400 slat (nsec): min=9382, max=53044, avg=26228.96, stdev=11411.35 00:16:48.400 clat (usec): min=127, max=1004, avg=391.03, stdev=143.55 00:16:48.400 lat (usec): min=138, max=1039, avg=417.25, stdev=146.89 00:16:48.400 clat percentiles (usec): 00:16:48.401 | 1.00th=[ 145], 5.00th=[ 249], 10.00th=[ 262], 20.00th=[ 277], 00:16:48.401 | 30.00th=[ 306], 40.00th=[ 351], 50.00th=[ 379], 60.00th=[ 396], 00:16:48.401 | 70.00th=[ 416], 80.00th=[ 449], 90.00th=[ 519], 95.00th=[ 750], 00:16:48.401 | 99.00th=[ 881], 99.50th=[ 906], 99.90th=[ 1004], 99.95th=[ 1004], 00:16:48.401 | 99.99th=[ 1004] 00:16:48.401 bw ( KiB/s): min= 4096, max= 4096, per=48.24%, avg=4096.00, stdev= 0.00, samples=1 00:16:48.401 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:48.401 lat (usec) : 250=4.90%, 500=81.17%, 750=5.46%, 1000=4.71% 00:16:48.401 lat (msec) : 2=0.19%, 50=3.58% 00:16:48.401 cpu : usr=0.89%, sys=1.08%, ctx=534, majf=0, minf=1 00:16:48.401 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:48.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.401 issued rwts: 
total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:48.401 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:48.401 job2: (groupid=0, jobs=1): err= 0: pid=815607: Mon Jun 10 10:42:12 2024 00:16:48.401 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:16:48.401 slat (nsec): min=7359, max=62153, avg=25436.58, stdev=3563.40 00:16:48.401 clat (usec): min=740, max=1257, avg=1046.16, stdev=72.04 00:16:48.401 lat (usec): min=765, max=1282, avg=1071.59, stdev=72.09 00:16:48.401 clat percentiles (usec): 00:16:48.401 | 1.00th=[ 857], 5.00th=[ 898], 10.00th=[ 955], 20.00th=[ 996], 00:16:48.401 | 30.00th=[ 1020], 40.00th=[ 1037], 50.00th=[ 1057], 60.00th=[ 1057], 00:16:48.401 | 70.00th=[ 1090], 80.00th=[ 1106], 90.00th=[ 1123], 95.00th=[ 1156], 00:16:48.401 | 99.00th=[ 1188], 99.50th=[ 1221], 99.90th=[ 1254], 99.95th=[ 1254], 00:16:48.401 | 99.99th=[ 1254] 00:16:48.401 write: IOPS=622, BW=2490KiB/s (2549kB/s)(2492KiB/1001msec); 0 zone resets 00:16:48.401 slat (nsec): min=9375, max=71096, avg=27557.31, stdev=10279.19 00:16:48.401 clat (usec): min=325, max=1077, avg=683.08, stdev=118.53 00:16:48.401 lat (usec): min=352, max=1110, avg=710.64, stdev=122.83 00:16:48.401 clat percentiles (usec): 00:16:48.401 | 1.00th=[ 404], 5.00th=[ 465], 10.00th=[ 523], 20.00th=[ 586], 00:16:48.401 | 30.00th=[ 635], 40.00th=[ 668], 50.00th=[ 693], 60.00th=[ 717], 00:16:48.401 | 70.00th=[ 742], 80.00th=[ 775], 90.00th=[ 824], 95.00th=[ 865], 00:16:48.401 | 99.00th=[ 955], 99.50th=[ 979], 99.90th=[ 1074], 99.95th=[ 1074], 00:16:48.401 | 99.99th=[ 1074] 00:16:48.401 bw ( KiB/s): min= 4096, max= 4096, per=48.24%, avg=4096.00, stdev= 0.00, samples=1 00:16:48.401 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:48.401 lat (usec) : 500=4.14%, 750=35.77%, 1000=24.23% 00:16:48.401 lat (msec) : 2=35.86% 00:16:48.401 cpu : usr=1.40%, sys=3.40%, ctx=1135, majf=0, minf=1 00:16:48.401 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:48.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.401 issued rwts: total=512,623,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:48.401 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:48.401 job3: (groupid=0, jobs=1): err= 0: pid=815616: Mon Jun 10 10:42:12 2024 00:16:48.401 read: IOPS=14, BW=59.3KiB/s (60.8kB/s)(60.0KiB/1011msec) 00:16:48.401 slat (nsec): min=26759, max=27680, avg=27142.07, stdev=221.78 00:16:48.401 clat (usec): min=41217, max=42147, avg=41920.67, stdev=212.51 00:16:48.401 lat (usec): min=41244, max=42174, avg=41947.81, stdev=212.55 00:16:48.401 clat percentiles (usec): 00:16:48.401 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:16:48.401 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:16:48.401 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:48.401 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:48.401 | 99.99th=[42206] 00:16:48.401 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:16:48.401 slat (usec): min=9, max=100, avg=33.47, stdev= 9.64 00:16:48.401 clat (usec): min=338, max=1233, avg=696.39, stdev=141.01 00:16:48.401 lat (usec): min=364, max=1289, avg=729.86, stdev=143.76 00:16:48.401 clat percentiles (usec): 00:16:48.401 | 1.00th=[ 371], 5.00th=[ 465], 10.00th=[ 506], 20.00th=[ 570], 00:16:48.401 | 30.00th=[ 611], 40.00th=[ 660], 50.00th=[ 
701], 60.00th=[ 742], 00:16:48.401 | 70.00th=[ 791], 80.00th=[ 824], 90.00th=[ 865], 95.00th=[ 906], 00:16:48.401 | 99.00th=[ 988], 99.50th=[ 1106], 99.90th=[ 1237], 99.95th=[ 1237], 00:16:48.401 | 99.99th=[ 1237] 00:16:48.401 bw ( KiB/s): min= 4096, max= 4096, per=48.24%, avg=4096.00, stdev= 0.00, samples=1 00:16:48.401 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:48.401 lat (usec) : 500=8.35%, 750=52.18%, 1000=35.67% 00:16:48.401 lat (msec) : 2=0.95%, 50=2.85% 00:16:48.401 cpu : usr=1.39%, sys=1.78%, ctx=528, majf=0, minf=1 00:16:48.401 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:48.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.401 issued rwts: total=15,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:48.401 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:48.401 00:16:48.401 Run status group 0 (all jobs): 00:16:48.401 READ: bw=2218KiB/s (2272kB/s), 59.3KiB/s-2046KiB/s (60.8kB/s-2095kB/s), io=2256KiB (2310kB), run=1001-1017msec 00:16:48.401 WRITE: bw=8492KiB/s (8695kB/s), 2014KiB/s-2490KiB/s (2062kB/s-2549kB/s), io=8636KiB (8843kB), run=1001-1017msec 00:16:48.401 00:16:48.401 Disk stats (read/write): 00:16:48.401 nvme0n1: ios=66/512, merge=0/0, ticks=1010/209, in_queue=1219, util=95.89% 00:16:48.401 nvme0n2: ios=40/512, merge=0/0, ticks=1540/193, in_queue=1733, util=97.03% 00:16:48.401 nvme0n3: ios=429/512, merge=0/0, ticks=447/335, in_queue=782, util=88.45% 00:16:48.401 nvme0n4: ios=50/512, merge=0/0, ticks=1256/291, in_queue=1547, util=98.07% 00:16:48.401 10:42:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:48.401 [global] 00:16:48.401 thread=1 00:16:48.401 invalidate=1 00:16:48.401 rw=randwrite 00:16:48.401 time_based=1 00:16:48.401 runtime=1 00:16:48.401 ioengine=libaio 00:16:48.401 direct=1 00:16:48.401 bs=4096 00:16:48.401 iodepth=1 00:16:48.401 norandommap=0 00:16:48.401 numjobs=1 00:16:48.401 00:16:48.401 verify_dump=1 00:16:48.401 verify_backlog=512 00:16:48.401 verify_state_save=0 00:16:48.401 do_verify=1 00:16:48.401 verify=crc32c-intel 00:16:48.401 [job0] 00:16:48.401 filename=/dev/nvme0n1 00:16:48.401 [job1] 00:16:48.401 filename=/dev/nvme0n2 00:16:48.401 [job2] 00:16:48.401 filename=/dev/nvme0n3 00:16:48.401 [job3] 00:16:48.401 filename=/dev/nvme0n4 00:16:48.401 Could not set queue depth (nvme0n1) 00:16:48.401 Could not set queue depth (nvme0n2) 00:16:48.401 Could not set queue depth (nvme0n3) 00:16:48.401 Could not set queue depth (nvme0n4) 00:16:48.665 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:48.665 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:48.665 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:48.665 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:48.665 fio-3.35 00:16:48.665 Starting 4 threads 00:16:50.080 00:16:50.080 job0: (groupid=0, jobs=1): err= 0: pid=816111: Mon Jun 10 10:42:13 2024 00:16:50.080 read: IOPS=17, BW=70.4KiB/s (72.1kB/s)(72.0KiB/1023msec) 00:16:50.080 slat (nsec): min=24768, max=25835, avg=25095.94, stdev=234.66 00:16:50.080 clat (usec): min=28286, max=41895, 
avg=40513.13, stdev=3073.03 00:16:50.080 lat (usec): min=28311, max=41921, avg=40538.23, stdev=3073.05 00:16:50.080 clat percentiles (usec): 00:16:50.080 | 1.00th=[28181], 5.00th=[28181], 10.00th=[40633], 20.00th=[41157], 00:16:50.080 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:50.080 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:16:50.080 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:16:50.080 | 99.99th=[41681] 00:16:50.080 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:16:50.080 slat (nsec): min=8989, max=49992, avg=25921.78, stdev=9557.25 00:16:50.080 clat (usec): min=280, max=739, avg=539.81, stdev=63.44 00:16:50.080 lat (usec): min=290, max=769, avg=565.73, stdev=66.50 00:16:50.080 clat percentiles (usec): 00:16:50.080 | 1.00th=[ 375], 5.00th=[ 429], 10.00th=[ 445], 20.00th=[ 474], 00:16:50.080 | 30.00th=[ 523], 40.00th=[ 545], 50.00th=[ 553], 60.00th=[ 570], 00:16:50.080 | 70.00th=[ 578], 80.00th=[ 586], 90.00th=[ 611], 95.00th=[ 619], 00:16:50.080 | 99.00th=[ 644], 99.50th=[ 676], 99.90th=[ 742], 99.95th=[ 742], 00:16:50.080 | 99.99th=[ 742] 00:16:50.080 bw ( KiB/s): min= 4096, max= 4096, per=51.40%, avg=4096.00, stdev= 0.00, samples=1 00:16:50.080 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:50.080 lat (usec) : 500=23.96%, 750=72.64% 00:16:50.080 lat (msec) : 50=3.40% 00:16:50.080 cpu : usr=0.98%, sys=1.08%, ctx=530, majf=0, minf=1 00:16:50.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:50.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.080 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:50.081 job1: (groupid=0, jobs=1): err= 0: pid=816118: Mon Jun 10 10:42:13 2024 00:16:50.081 read: IOPS=17, BW=70.0KiB/s (71.7kB/s)(72.0KiB/1028msec) 00:16:50.081 slat (nsec): min=24162, max=25254, avg=24457.33, stdev=271.97 00:16:50.081 clat (usec): min=40722, max=41041, avg=40952.03, stdev=70.87 00:16:50.081 lat (usec): min=40747, max=41066, avg=40976.49, stdev=70.87 00:16:50.081 clat percentiles (usec): 00:16:50.081 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:16:50.081 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:50.081 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:50.081 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:50.081 | 99.99th=[41157] 00:16:50.081 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:16:50.081 slat (nsec): min=8976, max=50589, avg=28574.02, stdev=7550.01 00:16:50.081 clat (usec): min=198, max=763, avg=529.45, stdev=98.30 00:16:50.081 lat (usec): min=208, max=793, avg=558.03, stdev=100.14 00:16:50.081 clat percentiles (usec): 00:16:50.081 | 1.00th=[ 277], 5.00th=[ 375], 10.00th=[ 404], 20.00th=[ 445], 00:16:50.081 | 30.00th=[ 490], 40.00th=[ 510], 50.00th=[ 529], 60.00th=[ 562], 00:16:50.081 | 70.00th=[ 586], 80.00th=[ 619], 90.00th=[ 652], 95.00th=[ 685], 00:16:50.081 | 99.00th=[ 734], 99.50th=[ 742], 99.90th=[ 766], 99.95th=[ 766], 00:16:50.081 | 99.99th=[ 766] 00:16:50.081 bw ( KiB/s): min= 4096, max= 4096, per=51.40%, avg=4096.00, stdev= 0.00, samples=1 00:16:50.081 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:50.081 lat 
(usec) : 250=0.19%, 500=33.21%, 750=62.83%, 1000=0.38% 00:16:50.081 lat (msec) : 50=3.40% 00:16:50.081 cpu : usr=0.68%, sys=1.46%, ctx=530, majf=0, minf=1 00:16:50.081 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:50.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.081 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.081 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.081 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:50.081 job2: (groupid=0, jobs=1): err= 0: pid=816126: Mon Jun 10 10:42:13 2024 00:16:50.081 read: IOPS=472, BW=1890KiB/s (1935kB/s)(1892KiB/1001msec) 00:16:50.081 slat (nsec): min=6667, max=59564, avg=26173.39, stdev=4578.86 00:16:50.081 clat (usec): min=863, max=42157, avg=1263.73, stdev=2662.67 00:16:50.081 lat (usec): min=890, max=42181, avg=1289.90, stdev=2662.59 00:16:50.081 clat percentiles (usec): 00:16:50.081 | 1.00th=[ 938], 5.00th=[ 979], 10.00th=[ 1012], 20.00th=[ 1045], 00:16:50.081 | 30.00th=[ 1057], 40.00th=[ 1090], 50.00th=[ 1090], 60.00th=[ 1106], 00:16:50.081 | 70.00th=[ 1123], 80.00th=[ 1139], 90.00th=[ 1156], 95.00th=[ 1172], 00:16:50.081 | 99.00th=[ 1319], 99.50th=[ 1385], 99.90th=[42206], 99.95th=[42206], 00:16:50.081 | 99.99th=[42206] 00:16:50.081 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:16:50.081 slat (nsec): min=9329, max=65624, avg=27860.85, stdev=9233.24 00:16:50.081 clat (usec): min=436, max=1029, avg=718.96, stdev=98.99 00:16:50.081 lat (usec): min=446, max=1060, avg=746.83, stdev=102.69 00:16:50.081 clat percentiles (usec): 00:16:50.081 | 1.00th=[ 461], 5.00th=[ 553], 10.00th=[ 586], 20.00th=[ 644], 00:16:50.081 | 30.00th=[ 676], 40.00th=[ 701], 50.00th=[ 725], 60.00th=[ 750], 00:16:50.081 | 70.00th=[ 775], 80.00th=[ 807], 90.00th=[ 848], 95.00th=[ 873], 00:16:50.081 | 99.00th=[ 914], 99.50th=[ 922], 99.90th=[ 1029], 99.95th=[ 1029], 00:16:50.081 | 99.99th=[ 1029] 00:16:50.081 bw ( KiB/s): min= 4096, max= 4096, per=51.40%, avg=4096.00, stdev= 0.00, samples=1 00:16:50.081 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:50.081 lat (usec) : 500=1.42%, 750=29.85%, 1000=23.35% 00:16:50.081 lat (msec) : 2=45.18%, 50=0.20% 00:16:50.081 cpu : usr=2.10%, sys=2.90%, ctx=985, majf=0, minf=1 00:16:50.081 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:50.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.081 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.081 issued rwts: total=473,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.081 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:50.081 job3: (groupid=0, jobs=1): err= 0: pid=816133: Mon Jun 10 10:42:13 2024 00:16:50.081 read: IOPS=37, BW=149KiB/s (153kB/s)(152KiB/1019msec) 00:16:50.081 slat (nsec): min=24798, max=26320, avg=25410.50, stdev=241.49 00:16:50.081 clat (usec): min=708, max=42694, avg=19239.04, stdev=20735.37 00:16:50.081 lat (usec): min=734, max=42719, avg=19264.45, stdev=20735.40 00:16:50.081 clat percentiles (usec): 00:16:50.081 | 1.00th=[ 709], 5.00th=[ 709], 10.00th=[ 775], 20.00th=[ 799], 00:16:50.081 | 30.00th=[ 848], 40.00th=[ 881], 50.00th=[ 898], 60.00th=[41157], 00:16:50.081 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:16:50.081 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:16:50.081 | 99.99th=[42730] 00:16:50.081 write: 
IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:16:50.081 slat (nsec): min=9549, max=52673, avg=29094.39, stdev=8583.84 00:16:50.081 clat (usec): min=205, max=827, avg=520.06, stdev=110.74 00:16:50.081 lat (usec): min=215, max=878, avg=549.16, stdev=113.22 00:16:50.081 clat percentiles (usec): 00:16:50.081 | 1.00th=[ 269], 5.00th=[ 338], 10.00th=[ 396], 20.00th=[ 424], 00:16:50.081 | 30.00th=[ 449], 40.00th=[ 482], 50.00th=[ 519], 60.00th=[ 545], 00:16:50.081 | 70.00th=[ 578], 80.00th=[ 619], 90.00th=[ 668], 95.00th=[ 709], 00:16:50.081 | 99.00th=[ 766], 99.50th=[ 783], 99.90th=[ 824], 99.95th=[ 824], 00:16:50.081 | 99.99th=[ 824] 00:16:50.081 bw ( KiB/s): min= 4096, max= 4096, per=51.40%, avg=4096.00, stdev= 0.00, samples=1 00:16:50.081 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:50.081 lat (usec) : 250=0.36%, 500=41.09%, 750=50.55%, 1000=4.91% 00:16:50.081 lat (msec) : 50=3.09% 00:16:50.081 cpu : usr=1.08%, sys=1.18%, ctx=552, majf=0, minf=1 00:16:50.081 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:50.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.081 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.081 issued rwts: total=38,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.081 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:50.081 00:16:50.081 Run status group 0 (all jobs): 00:16:50.081 READ: bw=2128KiB/s (2179kB/s), 70.0KiB/s-1890KiB/s (71.7kB/s-1935kB/s), io=2188KiB (2241kB), run=1001-1028msec 00:16:50.081 WRITE: bw=7969KiB/s (8160kB/s), 1992KiB/s-2046KiB/s (2040kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1028msec 00:16:50.081 00:16:50.081 Disk stats (read/write): 00:16:50.081 nvme0n1: ios=63/512, merge=0/0, ticks=579/272, in_queue=851, util=86.37% 00:16:50.081 nvme0n2: ios=48/512, merge=0/0, ticks=1220/242, in_queue=1462, util=99.80% 00:16:50.081 nvme0n3: ios=356/512, merge=0/0, ticks=703/342, in_queue=1045, util=92.09% 00:16:50.081 nvme0n4: ios=75/512, merge=0/0, ticks=954/254, in_queue=1208, util=97.76% 00:16:50.081 10:42:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:50.081 [global] 00:16:50.081 thread=1 00:16:50.081 invalidate=1 00:16:50.081 rw=write 00:16:50.081 time_based=1 00:16:50.081 runtime=1 00:16:50.081 ioengine=libaio 00:16:50.081 direct=1 00:16:50.081 bs=4096 00:16:50.081 iodepth=128 00:16:50.081 norandommap=0 00:16:50.081 numjobs=1 00:16:50.081 00:16:50.081 verify_dump=1 00:16:50.081 verify_backlog=512 00:16:50.081 verify_state_save=0 00:16:50.081 do_verify=1 00:16:50.081 verify=crc32c-intel 00:16:50.081 [job0] 00:16:50.081 filename=/dev/nvme0n1 00:16:50.081 [job1] 00:16:50.081 filename=/dev/nvme0n2 00:16:50.081 [job2] 00:16:50.081 filename=/dev/nvme0n3 00:16:50.081 [job3] 00:16:50.081 filename=/dev/nvme0n4 00:16:50.081 Could not set queue depth (nvme0n1) 00:16:50.081 Could not set queue depth (nvme0n2) 00:16:50.081 Could not set queue depth (nvme0n3) 00:16:50.081 Could not set queue depth (nvme0n4) 00:16:50.353 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:50.353 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:50.353 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:50.353 job3: (g=0): 
rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:50.353 fio-3.35 00:16:50.353 Starting 4 threads 00:16:51.735 00:16:51.736 job0: (groupid=0, jobs=1): err= 0: pid=816638: Mon Jun 10 10:42:15 2024 00:16:51.736 read: IOPS=8102, BW=31.7MiB/s (33.2MB/s)(32.0MiB/1011msec) 00:16:51.736 slat (nsec): min=948, max=8908.5k, avg=58901.58, stdev=444656.60 00:16:51.736 clat (usec): min=2552, max=27565, avg=8000.46, stdev=3357.98 00:16:51.736 lat (usec): min=2802, max=27570, avg=8059.36, stdev=3388.05 00:16:51.736 clat percentiles (usec): 00:16:51.736 | 1.00th=[ 3982], 5.00th=[ 4948], 10.00th=[ 5342], 20.00th=[ 5669], 00:16:51.736 | 30.00th=[ 5932], 40.00th=[ 6325], 50.00th=[ 6783], 60.00th=[ 7767], 00:16:51.736 | 70.00th=[ 8455], 80.00th=[10028], 90.00th=[12518], 95.00th=[14746], 00:16:51.736 | 99.00th=[20055], 99.50th=[25560], 99.90th=[27657], 99.95th=[27657], 00:16:51.736 | 99.99th=[27657] 00:16:51.736 write: IOPS=8369, BW=32.7MiB/s (34.3MB/s)(33.1MiB/1011msec); 0 zone resets 00:16:51.736 slat (nsec): min=1543, max=7084.1k, avg=56500.25, stdev=370282.29 00:16:51.736 clat (usec): min=1145, max=50366, avg=7391.10, stdev=5416.35 00:16:51.736 lat (usec): min=1157, max=50376, avg=7447.60, stdev=5444.67 00:16:51.736 clat percentiles (usec): 00:16:51.736 | 1.00th=[ 2376], 5.00th=[ 3720], 10.00th=[ 4228], 20.00th=[ 4817], 00:16:51.736 | 30.00th=[ 5342], 40.00th=[ 5538], 50.00th=[ 5800], 60.00th=[ 6259], 00:16:51.736 | 70.00th=[ 6849], 80.00th=[ 8586], 90.00th=[11600], 95.00th=[14615], 00:16:51.736 | 99.00th=[35390], 99.50th=[44303], 99.90th=[49546], 99.95th=[50594], 00:16:51.736 | 99.99th=[50594] 00:16:51.736 bw ( KiB/s): min=29760, max=36920, per=36.24%, avg=33340.00, stdev=5062.88, samples=2 00:16:51.736 iops : min= 7440, max= 9230, avg=8335.00, stdev=1265.72, samples=2 00:16:51.736 lat (msec) : 2=0.14%, 4=4.77%, 10=77.54%, 20=15.62%, 50=1.89% 00:16:51.736 lat (msec) : 100=0.04% 00:16:51.736 cpu : usr=5.84%, sys=6.73%, ctx=484, majf=0, minf=1 00:16:51.736 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:16:51.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:51.736 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:51.736 issued rwts: total=8192,8462,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:51.736 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:51.736 job1: (groupid=0, jobs=1): err= 0: pid=816641: Mon Jun 10 10:42:15 2024 00:16:51.736 read: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec) 00:16:51.736 slat (nsec): min=844, max=14580k, avg=103728.18, stdev=731325.11 00:16:51.736 clat (usec): min=851, max=56860, avg=12095.49, stdev=7751.36 00:16:51.736 lat (usec): min=859, max=56867, avg=12199.22, stdev=7825.52 00:16:51.736 clat percentiles (usec): 00:16:51.736 | 1.00th=[ 1860], 5.00th=[ 4228], 10.00th=[ 6128], 20.00th=[ 6849], 00:16:51.736 | 30.00th=[ 7373], 40.00th=[ 8586], 50.00th=[10945], 60.00th=[11994], 00:16:51.736 | 70.00th=[13435], 80.00th=[15139], 90.00th=[19268], 95.00th=[27919], 00:16:51.736 | 99.00th=[46924], 99.50th=[50070], 99.90th=[56886], 99.95th=[56886], 00:16:51.736 | 99.99th=[56886] 00:16:51.736 write: IOPS=4745, BW=18.5MiB/s (19.4MB/s)(18.7MiB/1011msec); 0 zone resets 00:16:51.736 slat (nsec): min=1543, max=8750.6k, avg=94947.27, stdev=499532.95 00:16:51.736 clat (usec): min=806, max=56828, avg=15122.79, stdev=11911.41 00:16:51.736 lat (usec): min=819, max=56833, avg=15217.73, stdev=11989.22 00:16:51.736 clat percentiles (usec): 
00:16:51.736 | 1.00th=[ 1336], 5.00th=[ 3130], 10.00th=[ 3654], 20.00th=[ 5997], 00:16:51.736 | 30.00th=[ 8291], 40.00th=[ 9372], 50.00th=[10290], 60.00th=[11469], 00:16:51.736 | 70.00th=[15795], 80.00th=[26346], 90.00th=[35914], 95.00th=[41157], 00:16:51.736 | 99.00th=[44303], 99.50th=[45351], 99.90th=[46400], 99.95th=[50070], 00:16:51.736 | 99.99th=[56886] 00:16:51.736 bw ( KiB/s): min=13640, max=23728, per=20.31%, avg=18684.00, stdev=7133.29, samples=2 00:16:51.736 iops : min= 3410, max= 5932, avg=4671.00, stdev=1783.32, samples=2 00:16:51.736 lat (usec) : 1000=0.16% 00:16:51.736 lat (msec) : 2=1.84%, 4=5.60%, 10=38.00%, 20=36.79%, 50=17.31% 00:16:51.736 lat (msec) : 100=0.31% 00:16:51.736 cpu : usr=2.97%, sys=4.85%, ctx=532, majf=0, minf=1 00:16:51.736 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:16:51.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:51.736 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:51.736 issued rwts: total=4608,4798,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:51.736 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:51.736 job2: (groupid=0, jobs=1): err= 0: pid=816643: Mon Jun 10 10:42:15 2024 00:16:51.736 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:16:51.736 slat (nsec): min=889, max=14356k, avg=124224.20, stdev=853646.87 00:16:51.736 clat (usec): min=6294, max=62770, avg=15621.37, stdev=9004.78 00:16:51.736 lat (usec): min=6304, max=62795, avg=15745.59, stdev=9081.15 00:16:51.736 clat percentiles (usec): 00:16:51.736 | 1.00th=[ 7046], 5.00th=[ 8979], 10.00th=[10028], 20.00th=[10552], 00:16:51.736 | 30.00th=[11338], 40.00th=[11469], 50.00th=[12125], 60.00th=[12780], 00:16:51.736 | 70.00th=[15008], 80.00th=[19006], 90.00th=[25297], 95.00th=[39060], 00:16:51.736 | 99.00th=[50594], 99.50th=[58983], 99.90th=[61080], 99.95th=[61080], 00:16:51.736 | 99.99th=[62653] 00:16:51.736 write: IOPS=4567, BW=17.8MiB/s (18.7MB/s)(17.9MiB/1004msec); 0 zone resets 00:16:51.736 slat (nsec): min=1542, max=14335k, avg=100486.79, stdev=669916.43 00:16:51.736 clat (usec): min=3343, max=46164, avg=13775.19, stdev=6376.97 00:16:51.736 lat (usec): min=3711, max=46210, avg=13875.67, stdev=6431.62 00:16:51.736 clat percentiles (usec): 00:16:51.736 | 1.00th=[ 4228], 5.00th=[ 6980], 10.00th=[ 8291], 20.00th=[ 8717], 00:16:51.736 | 30.00th=[ 9372], 40.00th=[10552], 50.00th=[11863], 60.00th=[13566], 00:16:51.736 | 70.00th=[15795], 80.00th=[18744], 90.00th=[23200], 95.00th=[26084], 00:16:51.736 | 99.00th=[38011], 99.50th=[38011], 99.90th=[38011], 99.95th=[38011], 00:16:51.736 | 99.99th=[46400] 00:16:51.736 bw ( KiB/s): min=16376, max=19296, per=19.39%, avg=17836.00, stdev=2064.75, samples=2 00:16:51.736 iops : min= 4094, max= 4824, avg=4459.00, stdev=516.19, samples=2 00:16:51.736 lat (msec) : 4=0.32%, 10=24.06%, 20=61.36%, 50=13.50%, 100=0.76% 00:16:51.736 cpu : usr=2.79%, sys=5.28%, ctx=309, majf=0, minf=1 00:16:51.736 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:51.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:51.736 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:51.736 issued rwts: total=4096,4586,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:51.736 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:51.736 job3: (groupid=0, jobs=1): err= 0: pid=816649: Mon Jun 10 10:42:15 2024 00:16:51.736 read: IOPS=5069, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1010msec) 
00:16:51.736 slat (nsec): min=915, max=9044.2k, avg=72165.97, stdev=513372.00 00:16:51.736 clat (usec): min=3516, max=32114, avg=10271.30, stdev=4107.66 00:16:51.736 lat (usec): min=3519, max=32121, avg=10343.47, stdev=4144.76 00:16:51.736 clat percentiles (usec): 00:16:51.736 | 1.00th=[ 4686], 5.00th=[ 6521], 10.00th=[ 7177], 20.00th=[ 7439], 00:16:51.736 | 30.00th=[ 7701], 40.00th=[ 8291], 50.00th=[ 8979], 60.00th=[ 9765], 00:16:51.736 | 70.00th=[11338], 80.00th=[12649], 90.00th=[14353], 95.00th=[18482], 00:16:51.736 | 99.00th=[26608], 99.50th=[29230], 99.90th=[31327], 99.95th=[32113], 00:16:51.736 | 99.99th=[32113] 00:16:51.736 write: IOPS=5352, BW=20.9MiB/s (21.9MB/s)(21.1MiB/1010msec); 0 zone resets 00:16:51.736 slat (nsec): min=1585, max=7753.4k, avg=89431.18, stdev=507839.96 00:16:51.736 clat (usec): min=1274, max=51314, avg=13987.14, stdev=10051.02 00:16:51.736 lat (usec): min=1286, max=51317, avg=14076.57, stdev=10112.07 00:16:51.736 clat percentiles (usec): 00:16:51.736 | 1.00th=[ 2376], 5.00th=[ 3916], 10.00th=[ 4621], 20.00th=[ 5997], 00:16:51.736 | 30.00th=[ 6718], 40.00th=[ 7701], 50.00th=[ 9372], 60.00th=[13173], 00:16:51.736 | 70.00th=[17695], 80.00th=[22676], 90.00th=[30540], 95.00th=[33817], 00:16:51.736 | 99.00th=[41681], 99.50th=[44303], 99.90th=[46924], 99.95th=[47449], 00:16:51.736 | 99.99th=[51119] 00:16:51.736 bw ( KiB/s): min=16384, max=25840, per=22.95%, avg=21112.00, stdev=6686.40, samples=2 00:16:51.736 iops : min= 4096, max= 6460, avg=5278.00, stdev=1671.60, samples=2 00:16:51.736 lat (msec) : 2=0.35%, 4=2.44%, 10=54.25%, 20=28.11%, 50=14.84% 00:16:51.736 lat (msec) : 100=0.01% 00:16:51.736 cpu : usr=3.96%, sys=5.85%, ctx=456, majf=0, minf=1 00:16:51.736 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:51.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:51.736 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:51.736 issued rwts: total=5120,5406,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:51.736 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:51.736 00:16:51.736 Run status group 0 (all jobs): 00:16:51.736 READ: bw=85.1MiB/s (89.2MB/s), 15.9MiB/s-31.7MiB/s (16.7MB/s-33.2MB/s), io=86.0MiB (90.2MB), run=1004-1011msec 00:16:51.736 WRITE: bw=89.8MiB/s (94.2MB/s), 17.8MiB/s-32.7MiB/s (18.7MB/s-34.3MB/s), io=90.8MiB (95.2MB), run=1004-1011msec 00:16:51.736 00:16:51.736 Disk stats (read/write): 00:16:51.736 nvme0n1: ios=7217/7466, merge=0/0, ticks=50440/51835, in_queue=102275, util=97.19% 00:16:51.736 nvme0n2: ios=3619/4039, merge=0/0, ticks=37622/59968, in_queue=97590, util=87.36% 00:16:51.736 nvme0n3: ios=3259/3584, merge=0/0, ticks=26506/23382, in_queue=49888, util=88.41% 00:16:51.736 nvme0n4: ios=3965/4096, merge=0/0, ticks=37973/58801, in_queue=96774, util=89.54% 00:16:51.736 10:42:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:51.736 [global] 00:16:51.736 thread=1 00:16:51.736 invalidate=1 00:16:51.736 rw=randwrite 00:16:51.736 time_based=1 00:16:51.736 runtime=1 00:16:51.736 ioengine=libaio 00:16:51.736 direct=1 00:16:51.736 bs=4096 00:16:51.736 iodepth=128 00:16:51.736 norandommap=0 00:16:51.736 numjobs=1 00:16:51.736 00:16:51.736 verify_dump=1 00:16:51.736 verify_backlog=512 00:16:51.736 verify_state_save=0 00:16:51.736 do_verify=1 00:16:51.736 verify=crc32c-intel 00:16:51.736 [job0] 00:16:51.736 filename=/dev/nvme0n1 00:16:51.736 
[job1] 00:16:51.736 filename=/dev/nvme0n2 00:16:51.736 [job2] 00:16:51.736 filename=/dev/nvme0n3 00:16:51.736 [job3] 00:16:51.736 filename=/dev/nvme0n4 00:16:51.736 Could not set queue depth (nvme0n1) 00:16:51.736 Could not set queue depth (nvme0n2) 00:16:51.736 Could not set queue depth (nvme0n3) 00:16:51.736 Could not set queue depth (nvme0n4) 00:16:51.997 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:51.997 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:51.997 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:51.997 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:51.997 fio-3.35 00:16:51.997 Starting 4 threads 00:16:53.411 00:16:53.411 job0: (groupid=0, jobs=1): err= 0: pid=817144: Mon Jun 10 10:42:17 2024 00:16:53.411 read: IOPS=7132, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1005msec) 00:16:53.411 slat (nsec): min=980, max=8637.5k, avg=70967.51, stdev=507389.66 00:16:53.411 clat (usec): min=3394, max=40074, avg=9141.51, stdev=3843.76 00:16:53.411 lat (usec): min=3399, max=40081, avg=9212.47, stdev=3880.78 00:16:53.411 clat percentiles (usec): 00:16:53.411 | 1.00th=[ 5080], 5.00th=[ 5997], 10.00th=[ 6390], 20.00th=[ 7177], 00:16:53.411 | 30.00th=[ 7570], 40.00th=[ 7832], 50.00th=[ 8160], 60.00th=[ 8586], 00:16:53.411 | 70.00th=[ 9241], 80.00th=[10159], 90.00th=[12387], 95.00th=[15139], 00:16:53.411 | 99.00th=[28967], 99.50th=[36439], 99.90th=[39060], 99.95th=[40109], 00:16:53.411 | 99.99th=[40109] 00:16:53.411 write: IOPS=7535, BW=29.4MiB/s (30.9MB/s)(29.6MiB/1005msec); 0 zone resets 00:16:53.411 slat (nsec): min=1571, max=9054.9k, avg=60357.28, stdev=361255.18 00:16:53.411 clat (usec): min=1128, max=40047, avg=8166.65, stdev=3169.68 00:16:53.411 lat (usec): min=1137, max=40049, avg=8227.01, stdev=3182.10 00:16:53.411 clat percentiles (usec): 00:16:53.411 | 1.00th=[ 3392], 5.00th=[ 4555], 10.00th=[ 5145], 20.00th=[ 6063], 00:16:53.411 | 30.00th=[ 7046], 40.00th=[ 7308], 50.00th=[ 7570], 60.00th=[ 7767], 00:16:53.411 | 70.00th=[ 8291], 80.00th=[ 9503], 90.00th=[12387], 95.00th=[15008], 00:16:53.411 | 99.00th=[16909], 99.50th=[21627], 99.90th=[32375], 99.95th=[32375], 00:16:53.411 | 99.99th=[40109] 00:16:53.411 bw ( KiB/s): min=26800, max=32768, per=27.34%, avg=29784.00, stdev=4220.01, samples=2 00:16:53.411 iops : min= 6700, max= 8192, avg=7446.00, stdev=1055.00, samples=2 00:16:53.411 lat (msec) : 2=0.01%, 4=1.07%, 10=80.38%, 20=17.18%, 50=1.36% 00:16:53.411 cpu : usr=4.88%, sys=6.57%, ctx=649, majf=0, minf=1 00:16:53.411 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:16:53.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:53.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:53.411 issued rwts: total=7168,7573,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:53.411 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:53.411 job1: (groupid=0, jobs=1): err= 0: pid=817145: Mon Jun 10 10:42:17 2024 00:16:53.411 read: IOPS=8147, BW=31.8MiB/s (33.4MB/s)(32.0MiB/1004msec) 00:16:53.411 slat (nsec): min=953, max=11397k, avg=63165.29, stdev=449056.62 00:16:53.411 clat (usec): min=1819, max=24496, avg=8299.72, stdev=1981.54 00:16:53.411 lat (usec): min=2820, max=24526, avg=8362.89, stdev=2005.59 00:16:53.411 clat percentiles (usec): 
00:16:53.411 | 1.00th=[ 4686], 5.00th=[ 5735], 10.00th=[ 6259], 20.00th=[ 6915], 00:16:53.411 | 30.00th=[ 7242], 40.00th=[ 7570], 50.00th=[ 7832], 60.00th=[ 8160], 00:16:53.411 | 70.00th=[ 8848], 80.00th=[ 9634], 90.00th=[10814], 95.00th=[12518], 00:16:53.411 | 99.00th=[13829], 99.50th=[15008], 99.90th=[17433], 99.95th=[21365], 00:16:53.411 | 99.99th=[24511] 00:16:53.411 write: IOPS=8159, BW=31.9MiB/s (33.4MB/s)(32.0MiB/1004msec); 0 zone resets 00:16:53.411 slat (nsec): min=1584, max=11285k, avg=54156.33, stdev=348488.51 00:16:53.411 clat (usec): min=1124, max=23488, avg=7238.67, stdev=2706.98 00:16:53.411 lat (usec): min=1133, max=23492, avg=7292.83, stdev=2714.29 00:16:53.411 clat percentiles (usec): 00:16:53.411 | 1.00th=[ 2606], 5.00th=[ 3982], 10.00th=[ 4817], 20.00th=[ 5604], 00:16:53.411 | 30.00th=[ 6194], 40.00th=[ 6587], 50.00th=[ 6980], 60.00th=[ 7177], 00:16:53.411 | 70.00th=[ 7570], 80.00th=[ 7963], 90.00th=[ 9503], 95.00th=[11469], 00:16:53.411 | 99.00th=[19268], 99.50th=[20579], 99.90th=[22676], 99.95th=[22676], 00:16:53.411 | 99.99th=[23462] 00:16:53.411 bw ( KiB/s): min=31904, max=33632, per=30.08%, avg=32768.00, stdev=1221.88, samples=2 00:16:53.411 iops : min= 7976, max= 8408, avg=8192.00, stdev=305.47, samples=2 00:16:53.411 lat (msec) : 2=0.09%, 4=2.53%, 10=84.63%, 20=12.34%, 50=0.40% 00:16:53.411 cpu : usr=5.88%, sys=7.58%, ctx=699, majf=0, minf=1 00:16:53.411 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:16:53.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:53.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:53.411 issued rwts: total=8180,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:53.411 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:53.411 job2: (groupid=0, jobs=1): err= 0: pid=817150: Mon Jun 10 10:42:17 2024 00:16:53.411 read: IOPS=4903, BW=19.2MiB/s (20.1MB/s)(19.2MiB/1004msec) 00:16:53.411 slat (nsec): min=897, max=14486k, avg=86644.56, stdev=672359.28 00:16:53.411 clat (usec): min=1071, max=37250, avg=12147.39, stdev=4588.35 00:16:53.411 lat (usec): min=4980, max=37274, avg=12234.04, stdev=4624.91 00:16:53.411 clat percentiles (usec): 00:16:53.411 | 1.00th=[ 5407], 5.00th=[ 7504], 10.00th=[ 8848], 20.00th=[ 9503], 00:16:53.411 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10683], 00:16:53.411 | 70.00th=[12518], 80.00th=[15270], 90.00th=[17957], 95.00th=[22676], 00:16:53.411 | 99.00th=[27657], 99.50th=[29230], 99.90th=[29492], 99.95th=[30016], 00:16:53.411 | 99.99th=[37487] 00:16:53.411 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:16:53.411 slat (nsec): min=1575, max=15175k, avg=89550.28, stdev=587613.59 00:16:53.411 clat (usec): min=1271, max=66282, avg=13159.74, stdev=9119.37 00:16:53.411 lat (usec): min=1281, max=66288, avg=13249.29, stdev=9174.77 00:16:53.411 clat percentiles (usec): 00:16:53.411 | 1.00th=[ 3064], 5.00th=[ 5932], 10.00th=[ 8160], 20.00th=[ 9110], 00:16:53.411 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10552], 00:16:53.411 | 70.00th=[12256], 80.00th=[15139], 90.00th=[20055], 95.00th=[30540], 00:16:53.411 | 99.00th=[59507], 99.50th=[63701], 99.90th=[66323], 99.95th=[66323], 00:16:53.411 | 99.99th=[66323] 00:16:53.411 bw ( KiB/s): min=20480, max=20480, per=18.80%, avg=20480.00, stdev= 0.00, samples=2 00:16:53.411 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:16:53.412 lat (msec) : 2=0.35%, 4=0.56%, 10=42.48%, 20=47.66%, 50=7.95% 
00:16:53.412 lat (msec) : 100=1.02% 00:16:53.412 cpu : usr=3.69%, sys=5.18%, ctx=483, majf=0, minf=1 00:16:53.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:53.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:53.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:53.412 issued rwts: total=4923,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:53.412 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:53.412 job3: (groupid=0, jobs=1): err= 0: pid=817151: Mon Jun 10 10:42:17 2024 00:16:53.412 read: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec) 00:16:53.412 slat (nsec): min=1005, max=11991k, avg=87762.56, stdev=680504.68 00:16:53.412 clat (usec): min=2572, max=40449, avg=11122.51, stdev=4335.32 00:16:53.412 lat (usec): min=2582, max=40454, avg=11210.27, stdev=4372.52 00:16:53.412 clat percentiles (usec): 00:16:53.412 | 1.00th=[ 3163], 5.00th=[ 7308], 10.00th=[ 7504], 20.00th=[ 7635], 00:16:53.412 | 30.00th=[ 7898], 40.00th=[ 8848], 50.00th=[10945], 60.00th=[11731], 00:16:53.412 | 70.00th=[12387], 80.00th=[13698], 90.00th=[16450], 95.00th=[19006], 00:16:53.412 | 99.00th=[24773], 99.50th=[27657], 99.90th=[34341], 99.95th=[34341], 00:16:53.412 | 99.99th=[40633] 00:16:53.412 write: IOPS=6516, BW=25.5MiB/s (26.7MB/s)(25.7MiB/1008msec); 0 zone resets 00:16:53.412 slat (nsec): min=1650, max=10628k, avg=61372.09, stdev=398095.72 00:16:53.412 clat (usec): min=921, max=21950, avg=9049.29, stdev=3121.96 00:16:53.412 lat (usec): min=925, max=21952, avg=9110.66, stdev=3144.18 00:16:53.412 clat percentiles (usec): 00:16:53.412 | 1.00th=[ 2147], 5.00th=[ 4047], 10.00th=[ 5342], 20.00th=[ 7308], 00:16:53.412 | 30.00th=[ 7767], 40.00th=[ 7963], 50.00th=[ 8160], 60.00th=[ 8848], 00:16:53.412 | 70.00th=[10683], 80.00th=[12125], 90.00th=[12256], 95.00th=[14353], 00:16:53.412 | 99.00th=[17695], 99.50th=[18482], 99.90th=[19268], 99.95th=[21627], 00:16:53.412 | 99.99th=[21890] 00:16:53.412 bw ( KiB/s): min=24576, max=26960, per=23.65%, avg=25768.00, stdev=1685.74, samples=2 00:16:53.412 iops : min= 6144, max= 6740, avg=6442.00, stdev=421.44, samples=2 00:16:53.412 lat (usec) : 1000=0.06% 00:16:53.412 lat (msec) : 2=0.20%, 4=2.99%, 10=53.80%, 20=41.00%, 50=1.95% 00:16:53.412 cpu : usr=3.38%, sys=5.66%, ctx=696, majf=0, minf=1 00:16:53.412 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:16:53.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:53.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:53.412 issued rwts: total=6144,6569,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:53.412 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:53.412 00:16:53.412 Run status group 0 (all jobs): 00:16:53.412 READ: bw=102MiB/s (107MB/s), 19.2MiB/s-31.8MiB/s (20.1MB/s-33.4MB/s), io=103MiB (108MB), run=1004-1008msec 00:16:53.412 WRITE: bw=106MiB/s (112MB/s), 19.9MiB/s-31.9MiB/s (20.9MB/s-33.4MB/s), io=107MiB (112MB), run=1004-1008msec 00:16:53.412 00:16:53.412 Disk stats (read/write): 00:16:53.412 nvme0n1: ios=5955/6144, merge=0/0, ticks=51364/46231, in_queue=97595, util=96.29% 00:16:53.412 nvme0n2: ios=6699/6942, merge=0/0, ticks=53302/48623, in_queue=101925, util=98.37% 00:16:53.412 nvme0n3: ios=3710/4096, merge=0/0, ticks=38672/48107, in_queue=86779, util=88.41% 00:16:53.412 nvme0n4: ios=5223/5632, merge=0/0, ticks=55527/47814, in_queue=103341, util=100.00% 00:16:53.412 10:42:17 nvmf_tcp.nvmf_fio_target -- 
target/fio.sh@55 -- # sync 00:16:53.412 10:42:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=817481 00:16:53.412 10:42:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:16:53.412 10:42:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:53.412 [global] 00:16:53.412 thread=1 00:16:53.412 invalidate=1 00:16:53.412 rw=read 00:16:53.412 time_based=1 00:16:53.412 runtime=10 00:16:53.412 ioengine=libaio 00:16:53.412 direct=1 00:16:53.412 bs=4096 00:16:53.412 iodepth=1 00:16:53.412 norandommap=1 00:16:53.412 numjobs=1 00:16:53.412 00:16:53.412 [job0] 00:16:53.412 filename=/dev/nvme0n1 00:16:53.412 [job1] 00:16:53.412 filename=/dev/nvme0n2 00:16:53.412 [job2] 00:16:53.412 filename=/dev/nvme0n3 00:16:53.412 [job3] 00:16:53.412 filename=/dev/nvme0n4 00:16:53.412 Could not set queue depth (nvme0n1) 00:16:53.412 Could not set queue depth (nvme0n2) 00:16:53.412 Could not set queue depth (nvme0n3) 00:16:53.412 Could not set queue depth (nvme0n4) 00:16:53.676 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:53.676 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:53.676 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:53.676 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:53.676 fio-3.35 00:16:53.676 Starting 4 threads 00:16:56.223 10:42:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:56.223 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=7872512, buflen=4096 00:16:56.223 fio: pid=817677, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:56.223 10:42:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:56.502 10:42:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:56.502 10:42:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:56.502 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=7135232, buflen=4096 00:16:56.502 fio: pid=817673, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:56.817 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=13561856, buflen=4096 00:16:56.817 fio: pid=817667, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:56.817 10:42:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:56.817 10:42:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:56.817 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=12513280, buflen=4096 00:16:56.817 fio: pid=817669, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:56.817 10:42:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:56.817 10:42:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:56.817 00:16:56.817 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=817667: Mon Jun 10 10:42:21 2024 00:16:56.817 read: IOPS=1166, BW=4663KiB/s (4775kB/s)(12.9MiB/2840msec) 00:16:56.817 slat (usec): min=6, max=12606, avg=35.45, stdev=360.29 00:16:56.817 clat (usec): min=196, max=41888, avg=816.41, stdev=1037.22 00:16:56.817 lat (usec): min=205, max=41913, avg=851.86, stdev=1099.43 00:16:56.817 clat percentiles (usec): 00:16:56.817 | 1.00th=[ 412], 5.00th=[ 494], 10.00th=[ 545], 20.00th=[ 603], 00:16:56.817 | 30.00th=[ 685], 40.00th=[ 816], 50.00th=[ 848], 60.00th=[ 873], 00:16:56.817 | 70.00th=[ 898], 80.00th=[ 922], 90.00th=[ 955], 95.00th=[ 988], 00:16:56.817 | 99.00th=[ 1057], 99.50th=[ 1123], 99.90th=[ 1500], 99.95th=[41681], 00:16:56.817 | 99.99th=[41681] 00:16:56.817 bw ( KiB/s): min= 4040, max= 5688, per=35.80%, avg=4769.60, stdev=842.54, samples=5 00:16:56.817 iops : min= 1010, max= 1422, avg=1192.40, stdev=210.63, samples=5 00:16:56.817 lat (usec) : 250=0.15%, 500=5.40%, 750=27.36%, 1000=63.68% 00:16:56.817 lat (msec) : 2=3.29%, 20=0.03%, 50=0.06% 00:16:56.817 cpu : usr=1.34%, sys=3.06%, ctx=3316, majf=0, minf=1 00:16:56.817 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:56.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.817 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.817 issued rwts: total=3312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:56.817 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:56.817 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=817669: Mon Jun 10 10:42:21 2024 00:16:56.817 read: IOPS=1014, BW=4057KiB/s (4154kB/s)(11.9MiB/3012msec) 00:16:56.817 slat (usec): min=6, max=17885, avg=41.17, stdev=473.26 00:16:56.817 clat (usec): min=338, max=42232, avg=938.21, stdev=1140.94 00:16:56.817 lat (usec): min=351, max=42257, avg=979.38, stdev=1235.68 00:16:56.817 clat percentiles (usec): 00:16:56.817 | 1.00th=[ 611], 5.00th=[ 742], 10.00th=[ 791], 20.00th=[ 840], 00:16:56.817 | 30.00th=[ 865], 40.00th=[ 889], 50.00th=[ 906], 60.00th=[ 930], 00:16:56.817 | 70.00th=[ 955], 80.00th=[ 979], 90.00th=[ 1012], 95.00th=[ 1045], 00:16:56.817 | 99.00th=[ 1156], 99.50th=[ 1237], 99.90th=[ 1729], 99.95th=[41681], 00:16:56.817 | 99.99th=[42206] 00:16:56.817 bw ( KiB/s): min= 3432, max= 4632, per=30.95%, avg=4123.20, stdev=458.37, samples=5 00:16:56.817 iops : min= 858, max= 1158, avg=1030.80, stdev=114.59, samples=5 00:16:56.817 lat (usec) : 500=0.52%, 750=4.94%, 1000=80.92% 00:16:56.817 lat (msec) : 2=13.48%, 50=0.10% 00:16:56.817 cpu : usr=0.93%, sys=3.12%, ctx=3061, majf=0, minf=1 00:16:56.817 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:56.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.817 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.817 issued rwts: total=3056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:56.817 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:56.817 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=817673: Mon Jun 10 10:42:21 2024 00:16:56.817 read: IOPS=642, BW=2569KiB/s (2631kB/s)(6968KiB/2712msec) 00:16:56.817 slat (usec): min=6, max=247, avg=26.38, stdev= 6.42 00:16:56.817 
clat (usec): min=340, max=42700, avg=1523.43, stdev=3661.15 00:16:56.817 lat (usec): min=348, max=42726, avg=1549.81, stdev=3662.51 00:16:56.817 clat percentiles (usec): 00:16:56.817 | 1.00th=[ 766], 5.00th=[ 881], 10.00th=[ 938], 20.00th=[ 1074], 00:16:56.817 | 30.00th=[ 1139], 40.00th=[ 1188], 50.00th=[ 1221], 60.00th=[ 1237], 00:16:56.817 | 70.00th=[ 1270], 80.00th=[ 1303], 90.00th=[ 1352], 95.00th=[ 1401], 00:16:56.817 | 99.00th=[ 1663], 99.50th=[42206], 99.90th=[42206], 99.95th=[42730], 00:16:56.817 | 99.99th=[42730] 00:16:56.817 bw ( KiB/s): min= 1232, max= 3184, per=20.84%, avg=2776.00, stdev=863.24, samples=5 00:16:56.817 iops : min= 308, max= 796, avg=694.00, stdev=215.81, samples=5 00:16:56.818 lat (usec) : 500=0.23%, 750=0.63%, 1000=13.60% 00:16:56.818 lat (msec) : 2=84.62%, 50=0.86% 00:16:56.818 cpu : usr=0.89%, sys=1.81%, ctx=1745, majf=0, minf=1 00:16:56.818 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:56.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.818 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.818 issued rwts: total=1743,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:56.818 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:56.818 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=817677: Mon Jun 10 10:42:21 2024 00:16:56.818 read: IOPS=760, BW=3039KiB/s (3112kB/s)(7688KiB/2530msec) 00:16:56.818 slat (nsec): min=7175, max=60672, avg=25229.56, stdev=3252.94 00:16:56.818 clat (usec): min=616, max=42130, avg=1284.01, stdev=2821.03 00:16:56.818 lat (usec): min=641, max=42139, avg=1309.24, stdev=2820.88 00:16:56.818 clat percentiles (usec): 00:16:56.818 | 1.00th=[ 766], 5.00th=[ 865], 10.00th=[ 898], 20.00th=[ 988], 00:16:56.818 | 30.00th=[ 1029], 40.00th=[ 1074], 50.00th=[ 1106], 60.00th=[ 1123], 00:16:56.818 | 70.00th=[ 1156], 80.00th=[ 1172], 90.00th=[ 1221], 95.00th=[ 1270], 00:16:56.818 | 99.00th=[ 1483], 99.50th=[26608], 99.90th=[41681], 99.95th=[42206], 00:16:56.818 | 99.99th=[42206] 00:16:56.818 bw ( KiB/s): min= 1632, max= 3608, per=23.07%, avg=3073.60, stdev=834.41, samples=5 00:16:56.818 iops : min= 408, max= 902, avg=768.40, stdev=208.60, samples=5 00:16:56.818 lat (usec) : 750=0.73%, 1000=22.57% 00:16:56.818 lat (msec) : 2=76.13%, 50=0.52% 00:16:56.818 cpu : usr=0.79%, sys=2.29%, ctx=1923, majf=0, minf=2 00:16:56.818 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:56.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.818 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.818 issued rwts: total=1923,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:56.818 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:56.818 00:16:56.818 Run status group 0 (all jobs): 00:16:56.818 READ: bw=13.0MiB/s (13.6MB/s), 2569KiB/s-4663KiB/s (2631kB/s-4775kB/s), io=39.2MiB (41.1MB), run=2530-3012msec 00:16:56.818 00:16:56.818 Disk stats (read/write): 00:16:56.818 nvme0n1: ios=3231/0, merge=0/0, ticks=2558/0, in_queue=2558, util=91.12% 00:16:56.818 nvme0n2: ios=2793/0, merge=0/0, ticks=2472/0, in_queue=2472, util=91.96% 00:16:56.818 nvme0n3: ios=1776/0, merge=0/0, ticks=2607/0, in_queue=2607, util=100.00% 00:16:56.818 nvme0n4: ios=1921/0, merge=0/0, ticks=2380/0, in_queue=2380, util=96.33% 00:16:57.114 10:42:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:16:57.114 10:42:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:57.114 10:42:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:57.114 10:42:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:57.375 10:42:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:57.375 10:42:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:57.636 10:42:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:57.636 10:42:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:57.636 10:42:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:16:57.636 10:42:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 817481 00:16:57.636 10:42:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:16:57.636 10:42:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:57.897 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:57.897 10:42:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:57.897 10:42:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1218 -- # local i=0 00:16:57.897 10:42:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:16:57.897 10:42:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:57.897 10:42:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:16:57.897 10:42:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:57.897 10:42:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1230 -- # return 0 00:16:57.897 10:42:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:57.897 10:42:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:57.897 nvmf hotplug test: fio failed as expected 00:16:57.897 10:42:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:57.897 10:42:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:57.897 10:42:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:57.897 10:42:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:57.897 10:42:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:57.897 10:42:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:16:57.897 10:42:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:57.897 10:42:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:16:57.897 10:42:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:57.897 10:42:22 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:16:57.897 10:42:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:57.897 10:42:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:57.897 rmmod nvme_tcp 00:16:57.897 rmmod nvme_fabrics 00:16:57.897 rmmod nvme_keyring 00:16:58.156 10:42:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:58.156 10:42:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:16:58.156 10:42:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:16:58.156 10:42:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 813528 ']' 00:16:58.156 10:42:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 813528 00:16:58.156 10:42:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@949 -- # '[' -z 813528 ']' 00:16:58.156 10:42:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # kill -0 813528 00:16:58.156 10:42:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # uname 00:16:58.156 10:42:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:58.156 10:42:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 813528 00:16:58.156 10:42:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:58.156 10:42:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:58.156 10:42:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 813528' 00:16:58.156 killing process with pid 813528 00:16:58.156 10:42:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@968 -- # kill 813528 00:16:58.156 [2024-06-10 10:42:22.259065] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:58.156 10:42:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@973 -- # wait 813528 00:16:58.156 10:42:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:58.156 10:42:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:58.156 10:42:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:58.156 10:42:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:58.156 10:42:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:58.156 10:42:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.156 10:42:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:58.156 10:42:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.703 10:42:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:00.703 00:17:00.703 real 0m28.513s 00:17:00.703 user 2m28.761s 00:17:00.703 sys 0m9.408s 00:17:00.703 10:42:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:00.703 10:42:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.703 ************************************ 00:17:00.703 END TEST nvmf_fio_target 00:17:00.703 ************************************ 00:17:00.703 10:42:24 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:00.703 10:42:24 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:00.703 10:42:24 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:00.703 10:42:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:00.703 ************************************ 00:17:00.703 START TEST nvmf_bdevio 00:17:00.703 ************************************ 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:00.703 * Looking for test storage... 00:17:00.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 
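For reference, a minimal nvme-cli sketch assembled from values that appear in this log (target 10.0.0.2 port 4420, subsystem nqn.2016-06.io.spdk:cnode1, and the hostnqn exported by nvmf/common.sh above); the exact flags are an illustration of how an initiator would reach the target this test brings up, not the test script's own invocation:

  # attach to the SPDK TCP target from the host side, then detach again
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
  nvme list                                       # the attached namespace should be listed here
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # same disconnect the fio_target test issues earlier in this log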
00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:17:00.703 10:42:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:07.298 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:07.298 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:17:07.298 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:07.298 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:07.298 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:07.298 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:07.298 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:07.298 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:17:07.298 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:07.298 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:17:07.298 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:17:07.298 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:17:07.298 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:17:07.559 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:17:07.559 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:17:07.559 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:07.560 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:07.560 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:07.560 Found net devices under 0000:31:00.0: cvl_0_0 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:07.560 Found net devices under 0000:31:00.1: cvl_0_1 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:07.560 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:07.823 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:07.823 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:07.823 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:07.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:07.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:17:07.823 00:17:07.823 --- 10.0.0.2 ping statistics --- 00:17:07.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.823 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:17:07.823 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:07.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:07.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.355 ms 00:17:07.823 00:17:07.823 --- 10.0.0.1 ping statistics --- 00:17:07.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.823 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:17:07.823 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:07.823 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:17:07.823 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:07.823 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:07.823 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:07.823 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:07.823 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:07.823 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:07.823 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:07.823 10:42:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:07.823 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:07.823 10:42:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:07.823 10:42:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:07.823 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=822776 00:17:07.823 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 822776 00:17:07.823 10:42:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:07.823 10:42:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@830 -- # '[' -z 822776 ']' 00:17:07.823 10:42:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.823 10:42:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:07.823 10:42:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:07.823 10:42:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:07.823 10:42:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:07.823 [2024-06-10 10:42:32.020648] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
00:17:07.823 [2024-06-10 10:42:32.020733] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:07.823 EAL: No free 2048 kB hugepages reported on node 1 00:17:08.085 [2024-06-10 10:42:32.110572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:08.085 [2024-06-10 10:42:32.202797] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:08.085 [2024-06-10 10:42:32.202855] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:08.085 [2024-06-10 10:42:32.202863] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:08.085 [2024-06-10 10:42:32.202870] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:08.085 [2024-06-10 10:42:32.202882] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:08.085 [2024-06-10 10:42:32.203052] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4 00:17:08.085 [2024-06-10 10:42:32.203211] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 5 00:17:08.085 [2024-06-10 10:42:32.203370] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 6 00:17:08.085 [2024-06-10 10:42:32.203371] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:17:08.658 10:42:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:08.658 10:42:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@863 -- # return 0 00:17:08.658 10:42:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:08.658 10:42:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:08.658 10:42:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:08.658 10:42:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:08.658 10:42:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:08.658 10:42:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:08.658 10:42:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:08.658 [2024-06-10 10:42:32.856449] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:08.658 10:42:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:08.658 10:42:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:08.658 10:42:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:08.658 10:42:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:08.658 Malloc0 00:17:08.658 10:42:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:08.658 10:42:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:08.658 10:42:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:08.658 10:42:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:08.658 10:42:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:08.658 10:42:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:08.658 10:42:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:08.658 10:42:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:08.658 10:42:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:08.658 10:42:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:08.658 10:42:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:08.658 10:42:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:08.658 [2024-06-10 10:42:32.921166] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:08.658 [2024-06-10 10:42:32.921482] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:08.658 10:42:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:08.658 10:42:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:08.658 10:42:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:08.658 10:42:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:17:08.658 10:42:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:17:08.658 10:42:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:08.658 10:42:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:08.658 { 00:17:08.658 "params": { 00:17:08.658 "name": "Nvme$subsystem", 00:17:08.658 "trtype": "$TEST_TRANSPORT", 00:17:08.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:08.658 "adrfam": "ipv4", 00:17:08.658 "trsvcid": "$NVMF_PORT", 00:17:08.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:08.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:08.658 "hdgst": ${hdgst:-false}, 00:17:08.658 "ddgst": ${ddgst:-false} 00:17:08.658 }, 00:17:08.658 "method": "bdev_nvme_attach_controller" 00:17:08.658 } 00:17:08.658 EOF 00:17:08.658 )") 00:17:08.658 10:42:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:17:08.658 10:42:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:17:08.658 10:42:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:17:08.658 10:42:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:08.658 "params": { 00:17:08.658 "name": "Nvme1", 00:17:08.658 "trtype": "tcp", 00:17:08.658 "traddr": "10.0.0.2", 00:17:08.658 "adrfam": "ipv4", 00:17:08.658 "trsvcid": "4420", 00:17:08.658 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:08.658 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:08.658 "hdgst": false, 00:17:08.658 "ddgst": false 00:17:08.658 }, 00:17:08.658 "method": "bdev_nvme_attach_controller" 00:17:08.658 }' 00:17:08.931 [2024-06-10 10:42:32.987788] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
00:17:08.931 [2024-06-10 10:42:32.987864] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid823104 ] 00:17:08.931 EAL: No free 2048 kB hugepages reported on node 1 00:17:08.931 [2024-06-10 10:42:33.056046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:08.931 [2024-06-10 10:42:33.132470] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:17:08.931 [2024-06-10 10:42:33.132655] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:17:08.931 [2024-06-10 10:42:33.132659] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.193 I/O targets: 00:17:09.193 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:09.193 00:17:09.193 00:17:09.193 CUnit - A unit testing framework for C - Version 2.1-3 00:17:09.193 http://cunit.sourceforge.net/ 00:17:09.193 00:17:09.193 00:17:09.193 Suite: bdevio tests on: Nvme1n1 00:17:09.193 Test: blockdev write read block ...passed 00:17:09.193 Test: blockdev write zeroes read block ...passed 00:17:09.193 Test: blockdev write zeroes read no split ...passed 00:17:09.193 Test: blockdev write zeroes read split ...passed 00:17:09.455 Test: blockdev write zeroes read split partial ...passed 00:17:09.455 Test: blockdev reset ...[2024-06-10 10:42:33.487685] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:09.455 [2024-06-10 10:42:33.487754] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4beb0 (9): Bad file descriptor 00:17:09.455 [2024-06-10 10:42:33.501493] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:09.455 passed 00:17:09.455 Test: blockdev write read 8 blocks ...passed 00:17:09.455 Test: blockdev write read size > 128k ...passed 00:17:09.455 Test: blockdev write read invalid size ...passed 00:17:09.455 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:09.455 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:09.455 Test: blockdev write read max offset ...passed 00:17:09.455 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:09.455 Test: blockdev writev readv 8 blocks ...passed 00:17:09.455 Test: blockdev writev readv 30 x 1block ...passed 00:17:09.717 Test: blockdev writev readv block ...passed 00:17:09.717 Test: blockdev writev readv size > 128k ...passed 00:17:09.717 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:09.717 Test: blockdev comparev and writev ...[2024-06-10 10:42:33.770572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:09.717 [2024-06-10 10:42:33.770595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:09.717 [2024-06-10 10:42:33.770606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:09.717 [2024-06-10 10:42:33.770611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:09.717 [2024-06-10 10:42:33.771153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:09.717 [2024-06-10 10:42:33.771164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:09.717 [2024-06-10 10:42:33.771178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:09.717 [2024-06-10 10:42:33.771185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:09.717 [2024-06-10 10:42:33.771669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:09.717 [2024-06-10 10:42:33.771679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:09.717 [2024-06-10 10:42:33.771688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:09.717 [2024-06-10 10:42:33.771695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:09.717 [2024-06-10 10:42:33.772183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:09.717 [2024-06-10 10:42:33.772192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:09.717 [2024-06-10 10:42:33.772201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:09.717 [2024-06-10 10:42:33.772207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:09.717 passed 00:17:09.717 Test: blockdev nvme passthru rw ...passed 00:17:09.717 Test: blockdev nvme passthru vendor specific ...[2024-06-10 10:42:33.857239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:09.717 [2024-06-10 10:42:33.857253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:09.717 [2024-06-10 10:42:33.857627] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:09.717 [2024-06-10 10:42:33.857635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:09.717 [2024-06-10 10:42:33.858021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:09.717 [2024-06-10 10:42:33.858028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:09.717 [2024-06-10 10:42:33.858457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:09.717 [2024-06-10 10:42:33.858466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:09.717 passed 00:17:09.717 Test: blockdev nvme admin passthru ...passed 00:17:09.717 Test: blockdev copy ...passed 00:17:09.717 00:17:09.717 Run Summary: Type Total Ran Passed Failed Inactive 00:17:09.717 suites 1 1 n/a 0 0 00:17:09.717 tests 23 23 23 0 0 00:17:09.717 asserts 152 152 152 0 n/a 00:17:09.717 00:17:09.717 Elapsed time = 1.199 seconds 00:17:09.979 10:42:34 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:09.979 10:42:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:09.979 10:42:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:09.979 10:42:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:09.979 10:42:34 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:09.979 10:42:34 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:17:09.979 10:42:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:09.979 10:42:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:17:09.979 10:42:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:09.979 10:42:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:17:09.979 10:42:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:09.979 10:42:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:09.979 rmmod nvme_tcp 00:17:09.979 rmmod nvme_fabrics 00:17:09.979 rmmod nvme_keyring 00:17:09.979 10:42:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:09.979 10:42:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:17:09.979 10:42:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:17:09.979 10:42:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 822776 ']' 00:17:09.979 10:42:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 822776 00:17:09.979 10:42:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@949 -- # '[' -z 
822776 ']' 00:17:09.979 10:42:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # kill -0 822776 00:17:09.979 10:42:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # uname 00:17:09.979 10:42:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:09.979 10:42:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 822776 00:17:09.979 10:42:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:17:09.979 10:42:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:17:09.979 10:42:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # echo 'killing process with pid 822776' 00:17:09.979 killing process with pid 822776 00:17:09.979 10:42:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@968 -- # kill 822776 00:17:09.979 [2024-06-10 10:42:34.176273] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:09.979 10:42:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@973 -- # wait 822776 00:17:10.241 10:42:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:10.241 10:42:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:10.241 10:42:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:10.241 10:42:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:10.241 10:42:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:10.241 10:42:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.241 10:42:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:10.241 10:42:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:12.181 10:42:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:12.181 00:17:12.181 real 0m11.883s 00:17:12.181 user 0m12.659s 00:17:12.181 sys 0m5.974s 00:17:12.181 10:42:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:12.181 10:42:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:12.181 ************************************ 00:17:12.181 END TEST nvmf_bdevio 00:17:12.181 ************************************ 00:17:12.181 10:42:36 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:12.181 10:42:36 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:12.181 10:42:36 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:12.181 10:42:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:12.442 ************************************ 00:17:12.442 START TEST nvmf_auth_target 00:17:12.442 ************************************ 00:17:12.442 10:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:12.442 * Looking for test storage... 
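Before nvmf_auth_target starts, nvmftestfini unwinds what nvmftestinit set up for the bdevio run. Condensed from the trace above, with one assumption called out in a comment (the body of _remove_spdk_ns is hidden behind xtrace_disable):

modprobe -v -r nvme-tcp           # produces the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above
modprobe -v -r nvme-fabrics
kill 822776                       # nvmfpid of the target used by this test, followed by wait 822776
ip netns delete cvl_0_0_ns_spdk   # assumed to be what _remove_spdk_ns does
ip -4 addr flush cvl_0_1          # drop the initiator-side test address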
00:17:12.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:12.442 10:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:12.442 10:42:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:12.442 10:42:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:12.442 10:42:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:12.442 10:42:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:12.442 10:42:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:12.442 10:42:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:12.442 10:42:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:12.442 10:42:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:12.442 10:42:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:12.442 10:42:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:12.442 10:42:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:12.442 10:42:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:12.442 10:42:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:12.442 10:42:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:12.442 10:42:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:12.442 10:42:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:12.442 10:42:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:12.442 10:42:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:12.442 10:42:36 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:12.442 10:42:36 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:12.442 10:42:36 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:12.443 10:42:36 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.443 10:42:36 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.443 10:42:36 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.443 10:42:36 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:12.443 10:42:36 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.443 10:42:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:12.443 10:42:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:12.443 10:42:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:12.443 10:42:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:12.443 10:42:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:12.443 10:42:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:12.443 10:42:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:12.443 10:42:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:12.443 10:42:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:12.443 10:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:12.443 10:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:12.443 10:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:12.443 10:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:12.443 10:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:12.443 10:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:12.443 10:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:12.443 10:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:17:12.443 10:42:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:12.443 10:42:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:12.443 10:42:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:12.443 10:42:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:12.443 10:42:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:12.443 10:42:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.443 10:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:12.443 10:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:12.443 10:42:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:12.443 10:42:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:12.443 10:42:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:12.443 10:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.591 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:20.591 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:20.591 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:20.591 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:20.591 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:20.591 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:20.591 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:20.591 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:20.591 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:20.591 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:20.591 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:20.591 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:20.591 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:20.591 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:20.591 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:20.591 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:20.591 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:20.591 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:20.591 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:20.591 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:20.591 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:20.591 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:20.591 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:20.591 10:42:43 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:20.591 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:20.591 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:20.592 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:20.592 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: 
cvl_0_0' 00:17:20.592 Found net devices under 0000:31:00.0: cvl_0_0 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:20.592 Found net devices under 0000:31:00.1: cvl_0_1 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:20.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:20.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.492 ms 00:17:20.592 00:17:20.592 --- 10.0.0.2 ping statistics --- 00:17:20.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.592 rtt min/avg/max/mdev = 0.492/0.492/0.492/0.000 ms 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:20.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:20.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:17:20.592 00:17:20.592 --- 10.0.0.1 ping statistics --- 00:17:20.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.592 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=827507 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 827507 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 827507 ']' 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
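The block above is the physical-NIC (NET_TYPE=phy) network setup: one port of the dual-port E810 is moved into a private namespace to act as the target side, the other stays in the default namespace for the initiator, and the two ping each other across the real link before the target is started inside the namespace. Condensed from the trace, using the interface names discovered above:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # open TCP port 4420 on the initiator-side interface
ping -c 1 10.0.0.2                                               # sanity-check both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# The target then runs inside the namespace (backgrounded by nvmfappstart in the harness):
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &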
00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:20.592 10:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.592 10:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:20.592 10:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:17:20.592 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:20.592 10:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:20.592 10:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.592 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:20.592 10:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=827528 00:17:20.592 10:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:20.592 10:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:20.592 10:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:17:20.592 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:20.592 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:20.592 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:20.592 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:17:20.592 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:20.592 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:20.592 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a2e2d5f8887244f74073e584c2a1a1ac40dec97bb065d72a 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.DCQ 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a2e2d5f8887244f74073e584c2a1a1ac40dec97bb065d72a 0 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a2e2d5f8887244f74073e584c2a1a1ac40dec97bb065d72a 0 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a2e2d5f8887244f74073e584c2a1a1ac40dec97bb065d72a 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.DCQ 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.DCQ 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.DCQ 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ed0e93c59c1a1a9f40109b894057267561f72074466e6a22033e7ab711011be2 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.9ia 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ed0e93c59c1a1a9f40109b894057267561f72074466e6a22033e7ab711011be2 3 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ed0e93c59c1a1a9f40109b894057267561f72074466e6a22033e7ab711011be2 3 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ed0e93c59c1a1a9f40109b894057267561f72074466e6a22033e7ab711011be2 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.9ia 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.9ia 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.9ia 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ba268036ce89870d0a5fc8ce8f60efed 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.0JE 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ba268036ce89870d0a5fc8ce8f60efed 1 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ba268036ce89870d0a5fc8ce8f60efed 1 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=ba268036ce89870d0a5fc8ce8f60efed 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.0JE 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.0JE 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.0JE 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=16e6941064bc9db8bfdc6894ae591823682c1ebfd2e4bbff 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.MCm 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 16e6941064bc9db8bfdc6894ae591823682c1ebfd2e4bbff 2 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 16e6941064bc9db8bfdc6894ae591823682c1ebfd2e4bbff 2 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=16e6941064bc9db8bfdc6894ae591823682c1ebfd2e4bbff 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.MCm 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.MCm 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.MCm 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e202f923c2509dde1985539b15ca8628b54f70b44647bb6c 00:17:20.593 
10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.R70 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e202f923c2509dde1985539b15ca8628b54f70b44647bb6c 2 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e202f923c2509dde1985539b15ca8628b54f70b44647bb6c 2 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e202f923c2509dde1985539b15ca8628b54f70b44647bb6c 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.R70 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.R70 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.R70 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=93757e0fe323d39c2ba0f269b3be515f 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.ana 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 93757e0fe323d39c2ba0f269b3be515f 1 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 93757e0fe323d39c2ba0f269b3be515f 1 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=93757e0fe323d39c2ba0f269b3be515f 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.ana 00:17:20.593 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.ana 00:17:20.594 10:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.ana 00:17:20.594 10:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:17:20.594 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:17:20.594 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:20.594 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:20.594 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:20.594 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:20.594 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:20.594 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=97689797970dfa82f48f60af668d2b9ac8db465c7fbca6617f88a89c588b1e7e 00:17:20.594 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:20.594 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.juS 00:17:20.594 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 97689797970dfa82f48f60af668d2b9ac8db465c7fbca6617f88a89c588b1e7e 3 00:17:20.594 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 97689797970dfa82f48f60af668d2b9ac8db465c7fbca6617f88a89c588b1e7e 3 00:17:20.594 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:20.594 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:20.594 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=97689797970dfa82f48f60af668d2b9ac8db465c7fbca6617f88a89c588b1e7e 00:17:20.594 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:20.594 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:20.594 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.juS 00:17:20.594 10:42:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.juS 00:17:20.594 10:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.juS 00:17:20.594 10:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:17:20.594 10:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 827507 00:17:20.594 10:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 827507 ']' 00:17:20.594 10:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.594 10:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:20.594 10:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
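Each DHCHAP secret the auth test uses is minted the same way: pull the requested number of random bytes from /dev/urandom as hex, wrap them in the DHHC-1 text form, and store the result in a mode-0600 temp file whose path becomes keys[i] or ckeys[i]. A hand re-trace of one of the calls above (gen_dhchap_key sha256 32) is sketched below; it assumes nvmf/common.sh from the SPDK tree is sourced so format_dhchap_key is available, and the encoding detail in the comment is an assumption, since the python one-liner's body never appears in the trace.

key=$(xxd -p -c0 -l 16 /dev/urandom)    # 16 random bytes -> 32 hex characters
file=$(mktemp -t spdk.key-sha256.XXX)
# format_dhchap_key <hex key> <hash id> (0=null, 1=sha256, 2=sha384, 3=sha512) is assumed to
# emit the standard DHHC-1 text form, roughly "DHHC-1:<hash id>:<base64 of key material>:".
format_dhchap_key "$key" 1 > "$file"
chmod 0600 "$file"
echo "$file"                            # this path is what keyring_file_add_key is later given

The trace then registers these files as key0 through key3 and ckey0 through ckey2 on both the target RPC socket and the host's /var/tmp/host.sock, so either side can reference them by keyring name during DH-HMAC-CHAP.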
00:17:20.594 10:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:20.594 10:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.594 10:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:20.594 10:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:17:20.594 10:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 827528 /var/tmp/host.sock 00:17:20.594 10:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 827528 ']' 00:17:20.594 10:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/host.sock 00:17:20.594 10:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:20.594 10:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:20.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:20.594 10:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:20.594 10:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.855 10:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:20.855 10:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:17:20.855 10:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:17:20.855 10:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:20.855 10:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.855 10:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:20.855 10:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:20.855 10:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.DCQ 00:17:20.855 10:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:20.855 10:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.855 10:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:20.855 10:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.DCQ 00:17:20.855 10:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.DCQ 00:17:21.115 10:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.9ia ]] 00:17:21.115 10:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.9ia 00:17:21.115 10:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:21.115 10:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.115 10:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:21.115 10:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.9ia 00:17:21.115 10:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.9ia 00:17:21.115 10:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:21.115 10:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.0JE 00:17:21.115 10:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:21.115 10:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.115 10:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:21.115 10:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.0JE 00:17:21.115 10:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.0JE 00:17:21.376 10:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.MCm ]] 00:17:21.376 10:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.MCm 00:17:21.376 10:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:21.376 10:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.376 10:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:21.376 10:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.MCm 00:17:21.376 10:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.MCm 00:17:21.636 10:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:21.636 10:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.R70 00:17:21.636 10:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:21.636 10:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.636 10:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:21.636 10:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.R70 00:17:21.636 10:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.R70 00:17:21.636 10:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.ana ]] 00:17:21.636 10:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ana 00:17:21.637 10:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:21.637 10:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.637 10:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:21.637 10:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ana 00:17:21.637 10:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.ana 00:17:21.956 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:21.956 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.juS 00:17:21.956 10:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:21.956 10:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.956 10:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:21.956 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.juS 00:17:21.956 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.juS 00:17:21.956 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:17:21.956 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:21.956 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:21.956 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.956 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:21.956 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:22.216 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:17:22.216 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:22.216 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:22.216 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:22.216 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:22.217 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.217 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.217 10:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:22.217 10:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.217 10:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:22.217 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.217 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.477 00:17:22.477 10:42:46 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:22.477 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:22.477 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.477 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.477 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.477 10:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:22.477 10:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.477 10:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:22.477 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:22.477 { 00:17:22.477 "cntlid": 1, 00:17:22.477 "qid": 0, 00:17:22.477 "state": "enabled", 00:17:22.477 "listen_address": { 00:17:22.477 "trtype": "TCP", 00:17:22.477 "adrfam": "IPv4", 00:17:22.477 "traddr": "10.0.0.2", 00:17:22.477 "trsvcid": "4420" 00:17:22.477 }, 00:17:22.477 "peer_address": { 00:17:22.477 "trtype": "TCP", 00:17:22.477 "adrfam": "IPv4", 00:17:22.477 "traddr": "10.0.0.1", 00:17:22.477 "trsvcid": "49894" 00:17:22.477 }, 00:17:22.477 "auth": { 00:17:22.477 "state": "completed", 00:17:22.477 "digest": "sha256", 00:17:22.477 "dhgroup": "null" 00:17:22.477 } 00:17:22.477 } 00:17:22.477 ]' 00:17:22.477 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:22.477 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:22.477 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:22.737 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:22.737 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:22.737 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.737 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.737 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.737 10:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:YTJlMmQ1Zjg4ODcyNDRmNzQwNzNlNTg0YzJhMWExYWM0MGRlYzk3YmIwNjVkNzJhYnyJog==: --dhchap-ctrl-secret DHHC-1:03:ZWQwZTkzYzU5YzFhMWE5ZjQwMTA5Yjg5NDA1NzI2NzU2MWY3MjA3NDQ2NmU2YTIyMDMzZTdhYjcxMTAxMWJlMjniCls=: 00:17:24.062 10:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.062 10:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:24.062 10:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:24.062 10:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:24.062 10:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:24.062 10:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:24.062 10:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:24.062 10:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:24.062 10:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:17:24.062 10:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:24.062 10:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:24.062 10:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:24.062 10:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:24.062 10:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.062 10:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.062 10:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:24.062 10:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.062 10:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:24.062 10:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.062 10:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.062 00:17:24.062 10:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:24.062 10:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:24.062 10:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.062 10:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.062 10:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.062 10:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:24.062 10:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.062 10:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:24.062 10:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:24.062 { 00:17:24.062 "cntlid": 3, 00:17:24.062 "qid": 0, 00:17:24.062 "state": "enabled", 00:17:24.062 "listen_address": { 00:17:24.062 
"trtype": "TCP", 00:17:24.062 "adrfam": "IPv4", 00:17:24.062 "traddr": "10.0.0.2", 00:17:24.062 "trsvcid": "4420" 00:17:24.062 }, 00:17:24.062 "peer_address": { 00:17:24.062 "trtype": "TCP", 00:17:24.062 "adrfam": "IPv4", 00:17:24.062 "traddr": "10.0.0.1", 00:17:24.062 "trsvcid": "49928" 00:17:24.062 }, 00:17:24.062 "auth": { 00:17:24.062 "state": "completed", 00:17:24.062 "digest": "sha256", 00:17:24.062 "dhgroup": "null" 00:17:24.062 } 00:17:24.062 } 00:17:24.062 ]' 00:17:24.062 10:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:24.062 10:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:24.062 10:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:24.062 10:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:24.062 10:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:24.323 10:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.323 10:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.323 10:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.323 10:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:YmEyNjgwMzZjZTg5ODcwZDBhNWZjOGNlOGY2MGVmZWTj5lnq: --dhchap-ctrl-secret DHHC-1:02:MTZlNjk0MTA2NGJjOWRiOGJmZGM2ODk0YWU1OTE4MjM2ODJjMWViZmQyZTRiYmZmr9e5tQ==: 00:17:25.266 10:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.266 10:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:25.266 10:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:25.266 10:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.266 10:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:25.266 10:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:25.266 10:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:25.266 10:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:25.266 10:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:17:25.266 10:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:25.266 10:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:25.266 10:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:25.266 10:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:25.266 10:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- 
# ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.266 10:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.266 10:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:25.266 10:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.266 10:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:25.266 10:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.266 10:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.526 00:17:25.526 10:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:25.526 10:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:25.526 10:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.526 10:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.526 10:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.526 10:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:25.526 10:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.786 10:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:25.786 10:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:25.786 { 00:17:25.786 "cntlid": 5, 00:17:25.786 "qid": 0, 00:17:25.786 "state": "enabled", 00:17:25.786 "listen_address": { 00:17:25.786 "trtype": "TCP", 00:17:25.786 "adrfam": "IPv4", 00:17:25.786 "traddr": "10.0.0.2", 00:17:25.786 "trsvcid": "4420" 00:17:25.786 }, 00:17:25.786 "peer_address": { 00:17:25.786 "trtype": "TCP", 00:17:25.786 "adrfam": "IPv4", 00:17:25.786 "traddr": "10.0.0.1", 00:17:25.786 "trsvcid": "49954" 00:17:25.786 }, 00:17:25.786 "auth": { 00:17:25.786 "state": "completed", 00:17:25.786 "digest": "sha256", 00:17:25.786 "dhgroup": "null" 00:17:25.786 } 00:17:25.786 } 00:17:25.786 ]' 00:17:25.786 10:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:25.786 10:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:25.786 10:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:25.786 10:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:25.786 10:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:25.786 10:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.786 10:42:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.786 10:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.047 10:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZTIwMmY5MjNjMjUwOWRkZTE5ODU1MzliMTVjYTg2MjhiNTRmNzBiNDQ2NDdiYjZjqlA85g==: --dhchap-ctrl-secret DHHC-1:01:OTM3NTdlMGZlMzIzZDM5YzJiYTBmMjY5YjNiZTUxNWbBSYh7: 00:17:26.617 10:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.617 10:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:26.617 10:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:26.617 10:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.617 10:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:26.617 10:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:26.617 10:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:26.617 10:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:26.878 10:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:17:26.878 10:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:26.878 10:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:26.878 10:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:26.878 10:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:26.878 10:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.878 10:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:26.878 10:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:26.878 10:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.878 10:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:26.878 10:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:26.878 10:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:26.878 00:17:27.139 10:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:27.139 10:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:27.139 10:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.139 10:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.139 10:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.139 10:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.139 10:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.139 10:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.139 10:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:27.139 { 00:17:27.139 "cntlid": 7, 00:17:27.139 "qid": 0, 00:17:27.139 "state": "enabled", 00:17:27.139 "listen_address": { 00:17:27.139 "trtype": "TCP", 00:17:27.139 "adrfam": "IPv4", 00:17:27.139 "traddr": "10.0.0.2", 00:17:27.139 "trsvcid": "4420" 00:17:27.139 }, 00:17:27.139 "peer_address": { 00:17:27.139 "trtype": "TCP", 00:17:27.139 "adrfam": "IPv4", 00:17:27.139 "traddr": "10.0.0.1", 00:17:27.139 "trsvcid": "56568" 00:17:27.139 }, 00:17:27.139 "auth": { 00:17:27.139 "state": "completed", 00:17:27.139 "digest": "sha256", 00:17:27.139 "dhgroup": "null" 00:17:27.139 } 00:17:27.139 } 00:17:27.139 ]' 00:17:27.139 10:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:27.139 10:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:27.139 10:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:27.400 10:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:27.400 10:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:27.400 10:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.400 10:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.400 10:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.400 10:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:OTc2ODk3OTc5NzBkZmE4MmY0OGY2MGFmNjY4ZDJiOWFjOGRiNDY1YzdmYmNhNjYxN2Y4OGE4OWM1ODhiMWU3ZReWLI4=: 00:17:28.341 10:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.341 10:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:28.341 10:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:28.341 
10:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.341 10:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:28.341 10:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:28.341 10:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:28.341 10:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:28.341 10:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:28.341 10:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:17:28.341 10:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:28.341 10:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:28.341 10:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:28.341 10:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:28.341 10:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.341 10:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.341 10:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:28.341 10:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.341 10:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:28.341 10:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.341 10:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.602 00:17:28.602 10:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:28.602 10:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:28.602 10:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.863 10:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.863 10:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.863 10:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:28.863 10:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.863 10:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:28.863 10:42:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:28.863 { 00:17:28.863 "cntlid": 9, 00:17:28.863 "qid": 0, 00:17:28.863 "state": "enabled", 00:17:28.863 "listen_address": { 00:17:28.863 "trtype": "TCP", 00:17:28.863 "adrfam": "IPv4", 00:17:28.863 "traddr": "10.0.0.2", 00:17:28.863 "trsvcid": "4420" 00:17:28.863 }, 00:17:28.863 "peer_address": { 00:17:28.863 "trtype": "TCP", 00:17:28.863 "adrfam": "IPv4", 00:17:28.863 "traddr": "10.0.0.1", 00:17:28.863 "trsvcid": "56612" 00:17:28.863 }, 00:17:28.863 "auth": { 00:17:28.863 "state": "completed", 00:17:28.863 "digest": "sha256", 00:17:28.863 "dhgroup": "ffdhe2048" 00:17:28.863 } 00:17:28.863 } 00:17:28.863 ]' 00:17:28.863 10:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:28.863 10:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:28.863 10:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:28.863 10:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:28.863 10:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:28.863 10:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.863 10:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.863 10:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.124 10:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:YTJlMmQ1Zjg4ODcyNDRmNzQwNzNlNTg0YzJhMWExYWM0MGRlYzk3YmIwNjVkNzJhYnyJog==: --dhchap-ctrl-secret DHHC-1:03:ZWQwZTkzYzU5YzFhMWE5ZjQwMTA5Yjg5NDA1NzI2NzU2MWY3MjA3NDQ2NmU2YTIyMDMzZTdhYjcxMTAxMWJlMjniCls=: 00:17:29.695 10:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.695 10:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:29.695 10:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:29.695 10:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.695 10:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:29.695 10:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:29.695 10:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:29.695 10:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:29.956 10:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:17:29.956 10:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.956 10:42:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:29.956 10:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:29.956 10:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:29.956 10:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.956 10:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.956 10:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:29.956 10:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.956 10:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:29.956 10:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.956 10:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.217 00:17:30.217 10:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:30.217 10:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:30.217 10:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.217 10:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.217 10:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.217 10:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:30.217 10:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.217 10:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:30.217 10:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:30.217 { 00:17:30.217 "cntlid": 11, 00:17:30.217 "qid": 0, 00:17:30.217 "state": "enabled", 00:17:30.217 "listen_address": { 00:17:30.217 "trtype": "TCP", 00:17:30.217 "adrfam": "IPv4", 00:17:30.217 "traddr": "10.0.0.2", 00:17:30.217 "trsvcid": "4420" 00:17:30.217 }, 00:17:30.217 "peer_address": { 00:17:30.217 "trtype": "TCP", 00:17:30.217 "adrfam": "IPv4", 00:17:30.217 "traddr": "10.0.0.1", 00:17:30.217 "trsvcid": "56646" 00:17:30.217 }, 00:17:30.217 "auth": { 00:17:30.217 "state": "completed", 00:17:30.217 "digest": "sha256", 00:17:30.217 "dhgroup": "ffdhe2048" 00:17:30.217 } 00:17:30.217 } 00:17:30.217 ]' 00:17:30.217 10:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:30.478 10:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:30.478 10:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:30.478 10:42:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:30.478 10:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:30.478 10:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.478 10:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.478 10:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.738 10:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:YmEyNjgwMzZjZTg5ODcwZDBhNWZjOGNlOGY2MGVmZWTj5lnq: --dhchap-ctrl-secret DHHC-1:02:MTZlNjk0MTA2NGJjOWRiOGJmZGM2ODk0YWU1OTE4MjM2ODJjMWViZmQyZTRiYmZmr9e5tQ==: 00:17:31.309 10:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.309 10:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:31.309 10:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:31.309 10:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.309 10:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:31.309 10:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:31.309 10:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:31.309 10:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:31.570 10:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:17:31.570 10:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.570 10:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:31.570 10:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:31.570 10:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:31.570 10:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.570 10:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.570 10:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:31.570 10:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.570 10:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:31.570 10:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.570 10:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.831 00:17:31.831 10:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:31.831 10:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:31.831 10:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.831 10:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.831 10:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.831 10:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:31.831 10:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.831 10:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:31.831 10:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:31.831 { 00:17:31.831 "cntlid": 13, 00:17:31.831 "qid": 0, 00:17:31.831 "state": "enabled", 00:17:31.831 "listen_address": { 00:17:31.831 "trtype": "TCP", 00:17:31.831 "adrfam": "IPv4", 00:17:31.831 "traddr": "10.0.0.2", 00:17:31.831 "trsvcid": "4420" 00:17:31.831 }, 00:17:31.831 "peer_address": { 00:17:31.831 "trtype": "TCP", 00:17:31.831 "adrfam": "IPv4", 00:17:31.831 "traddr": "10.0.0.1", 00:17:31.831 "trsvcid": "56672" 00:17:31.831 }, 00:17:31.831 "auth": { 00:17:31.831 "state": "completed", 00:17:31.831 "digest": "sha256", 00:17:31.831 "dhgroup": "ffdhe2048" 00:17:31.831 } 00:17:31.831 } 00:17:31.831 ]' 00:17:31.831 10:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:32.092 10:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:32.092 10:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:32.092 10:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:32.092 10:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:32.092 10:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.092 10:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.092 10:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.353 10:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZTIwMmY5MjNjMjUwOWRkZTE5ODU1MzliMTVjYTg2MjhiNTRmNzBiNDQ2NDdiYjZjqlA85g==: --dhchap-ctrl-secret DHHC-1:01:OTM3NTdlMGZlMzIzZDM5YzJiYTBmMjY5YjNiZTUxNWbBSYh7: 00:17:32.926 10:42:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.926 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:32.926 10:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:32.926 10:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.926 10:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:32.926 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:32.926 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:32.926 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:33.186 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:17:33.186 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:33.186 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:33.186 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:33.186 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:33.186 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.186 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:33.186 10:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:33.186 10:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.186 10:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:33.186 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:33.186 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:33.447 00:17:33.447 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:33.447 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:33.447 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.448 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.448 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
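The qpairs dump that follows is the verification step at the end of one connect_authenticate pass; the trace above repeats the same pass for every key and DH group. The lines below are a minimal illustrative condensation of one such pass (key0/ckey0 with sha256 and ffdhe2048), built only from RPC calls that appear verbatim in this log; the rpc/hostsock/subnqn/hostnqn variables are stand-ins added for readability, and the target-side calls are shown against rpc.py's default socket rather than the script's rpc_cmd wrapper, which is an assumption about that wrapper.

# Illustrative condensation of one connect_authenticate pass; variables are stand-ins.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

# Host side: restrict the initiator to one digest/dhgroup combination for this pass.
"$rpc" -s "$hostsock" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Target side: allow the host NQN with the key pair under test.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach a controller; DH-HMAC-CHAP runs during this connect.
"$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# The checks the script performs: controller name and completed auth state on the qpair.
"$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name'      # expect nvme0
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'    # expect completed

# Tear down the bdev-layer controller before the nvme-cli leg of the pass.
"$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0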
00:17:33.448 10:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:33.448 10:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.448 10:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:33.448 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:33.448 { 00:17:33.448 "cntlid": 15, 00:17:33.448 "qid": 0, 00:17:33.448 "state": "enabled", 00:17:33.448 "listen_address": { 00:17:33.448 "trtype": "TCP", 00:17:33.448 "adrfam": "IPv4", 00:17:33.448 "traddr": "10.0.0.2", 00:17:33.448 "trsvcid": "4420" 00:17:33.448 }, 00:17:33.448 "peer_address": { 00:17:33.448 "trtype": "TCP", 00:17:33.448 "adrfam": "IPv4", 00:17:33.448 "traddr": "10.0.0.1", 00:17:33.448 "trsvcid": "56700" 00:17:33.448 }, 00:17:33.448 "auth": { 00:17:33.448 "state": "completed", 00:17:33.448 "digest": "sha256", 00:17:33.448 "dhgroup": "ffdhe2048" 00:17:33.448 } 00:17:33.448 } 00:17:33.448 ]' 00:17:33.448 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:33.448 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:33.448 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:33.708 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:33.708 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:33.708 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.708 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.708 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.709 10:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:OTc2ODk3OTc5NzBkZmE4MmY0OGY2MGFmNjY4ZDJiOWFjOGRiNDY1YzdmYmNhNjYxN2Y4OGE4OWM1ODhiMWU3ZReWLI4=: 00:17:34.651 10:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.651 10:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:34.651 10:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.651 10:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.651 10:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.651 10:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:34.651 10:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:34.651 10:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:34.651 10:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:34.651 10:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:17:34.651 10:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:34.651 10:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:34.651 10:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:34.651 10:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:34.651 10:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.651 10:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.651 10:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.651 10:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.651 10:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.651 10:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.651 10:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.912 00:17:34.912 10:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:34.912 10:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:34.912 10:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.172 10:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.172 10:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.172 10:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:35.172 10:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.172 10:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:35.172 10:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:35.172 { 00:17:35.172 "cntlid": 17, 00:17:35.172 "qid": 0, 00:17:35.172 "state": "enabled", 00:17:35.172 "listen_address": { 00:17:35.172 "trtype": "TCP", 00:17:35.172 "adrfam": "IPv4", 00:17:35.172 "traddr": "10.0.0.2", 00:17:35.172 "trsvcid": "4420" 00:17:35.172 }, 00:17:35.172 "peer_address": { 00:17:35.172 "trtype": "TCP", 00:17:35.172 "adrfam": "IPv4", 00:17:35.172 "traddr": "10.0.0.1", 00:17:35.172 "trsvcid": "56720" 00:17:35.172 }, 00:17:35.172 "auth": { 00:17:35.172 "state": "completed", 00:17:35.172 "digest": "sha256", 00:17:35.172 "dhgroup": "ffdhe3072" 00:17:35.172 } 00:17:35.172 } 00:17:35.172 ]' 00:17:35.172 10:42:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:35.172 10:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:35.172 10:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:35.172 10:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:35.173 10:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:35.173 10:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.173 10:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.173 10:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.433 10:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:YTJlMmQ1Zjg4ODcyNDRmNzQwNzNlNTg0YzJhMWExYWM0MGRlYzk3YmIwNjVkNzJhYnyJog==: --dhchap-ctrl-secret DHHC-1:03:ZWQwZTkzYzU5YzFhMWE5ZjQwMTA5Yjg5NDA1NzI2NzU2MWY3MjA3NDQ2NmU2YTIyMDMzZTdhYjcxMTAxMWJlMjniCls=: 00:17:36.005 10:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.005 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.005 10:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:36.005 10:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:36.005 10:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.005 10:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:36.005 10:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:36.005 10:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:36.005 10:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:36.266 10:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:17:36.266 10:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:36.266 10:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:36.266 10:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:36.266 10:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:36.266 10:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.266 10:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.266 10:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:36.266 
10:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.266 10:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:36.266 10:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.266 10:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.528 00:17:36.528 10:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:36.528 10:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:36.528 10:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.847 10:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.847 10:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.847 10:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:36.847 10:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.847 10:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:36.847 10:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:36.847 { 00:17:36.847 "cntlid": 19, 00:17:36.847 "qid": 0, 00:17:36.847 "state": "enabled", 00:17:36.847 "listen_address": { 00:17:36.847 "trtype": "TCP", 00:17:36.847 "adrfam": "IPv4", 00:17:36.847 "traddr": "10.0.0.2", 00:17:36.847 "trsvcid": "4420" 00:17:36.847 }, 00:17:36.847 "peer_address": { 00:17:36.847 "trtype": "TCP", 00:17:36.847 "adrfam": "IPv4", 00:17:36.847 "traddr": "10.0.0.1", 00:17:36.847 "trsvcid": "41328" 00:17:36.847 }, 00:17:36.847 "auth": { 00:17:36.847 "state": "completed", 00:17:36.847 "digest": "sha256", 00:17:36.847 "dhgroup": "ffdhe3072" 00:17:36.847 } 00:17:36.847 } 00:17:36.847 ]' 00:17:36.847 10:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:36.847 10:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:36.847 10:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:36.847 10:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:36.847 10:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:36.847 10:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.847 10:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.847 10:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.847 10:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:YmEyNjgwMzZjZTg5ODcwZDBhNWZjOGNlOGY2MGVmZWTj5lnq: --dhchap-ctrl-secret DHHC-1:02:MTZlNjk0MTA2NGJjOWRiOGJmZGM2ODk0YWU1OTE4MjM2ODJjMWViZmQyZTRiYmZmr9e5tQ==: 00:17:37.878 10:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.878 10:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:37.878 10:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:37.878 10:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.878 10:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:37.878 10:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:37.878 10:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:37.878 10:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:37.878 10:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:17:37.878 10:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:37.878 10:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:37.878 10:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:37.878 10:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:37.878 10:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.878 10:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.878 10:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:37.878 10:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.878 10:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:37.878 10:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.878 10:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.139 00:17:38.139 10:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:38.139 10:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
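After the RPC-attached controller is detached, each pass repeats the handshake through the kernel initiator, which is what the nvme connect and disconnect lines around this point show. The sketch below restates that leg with placeholder DHHC-1 secrets standing in for the real per-key values printed earlier in the run, and shows the final host removal as a plain rpc.py call rather than the script's rpc_cmd wrapper; both substitutions are illustrative assumptions.

# Kernel-initiator leg of the same pass; <...> placeholders are hypothetical,
# the real DHHC-1 strings are the ones generated for each key earlier in the run.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 \
    --dhchap-secret 'DHHC-1:00:<key0 secret>:' \
    --dhchap-ctrl-secret 'DHHC-1:03:<ckey0 secret>:'

# The test only needs the connect to authenticate, so it disconnects immediately.
nvme disconnect -n "$subnqn"    # expect: disconnected 1 controller(s)

# Target side: drop the host entry so the next key/dhgroup pass can re-add it.
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"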
00:17:38.139 10:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.399 10:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.399 10:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.399 10:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.399 10:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.399 10:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.399 10:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:38.399 { 00:17:38.399 "cntlid": 21, 00:17:38.399 "qid": 0, 00:17:38.399 "state": "enabled", 00:17:38.399 "listen_address": { 00:17:38.399 "trtype": "TCP", 00:17:38.399 "adrfam": "IPv4", 00:17:38.399 "traddr": "10.0.0.2", 00:17:38.399 "trsvcid": "4420" 00:17:38.399 }, 00:17:38.399 "peer_address": { 00:17:38.399 "trtype": "TCP", 00:17:38.399 "adrfam": "IPv4", 00:17:38.399 "traddr": "10.0.0.1", 00:17:38.399 "trsvcid": "41344" 00:17:38.399 }, 00:17:38.399 "auth": { 00:17:38.399 "state": "completed", 00:17:38.399 "digest": "sha256", 00:17:38.399 "dhgroup": "ffdhe3072" 00:17:38.399 } 00:17:38.399 } 00:17:38.399 ]' 00:17:38.399 10:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:38.399 10:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:38.399 10:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:38.400 10:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:38.400 10:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:38.400 10:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.400 10:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.400 10:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.661 10:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZTIwMmY5MjNjMjUwOWRkZTE5ODU1MzliMTVjYTg2MjhiNTRmNzBiNDQ2NDdiYjZjqlA85g==: --dhchap-ctrl-secret DHHC-1:01:OTM3NTdlMGZlMzIzZDM5YzJiYTBmMjY5YjNiZTUxNWbBSYh7: 00:17:39.233 10:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.233 10:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:39.233 10:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:39.233 10:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.233 10:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:39.233 10:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:17:39.233 10:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:39.233 10:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:39.494 10:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:17:39.494 10:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:39.494 10:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:39.494 10:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:39.494 10:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:39.494 10:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.494 10:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:39.494 10:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:39.494 10:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.494 10:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:39.494 10:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:39.494 10:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:39.755 00:17:39.755 10:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.755 10:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:39.755 10:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.755 10:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.755 10:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.755 10:43:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:39.755 10:43:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.015 10:43:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:40.015 10:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:40.015 { 00:17:40.015 "cntlid": 23, 00:17:40.015 "qid": 0, 00:17:40.015 "state": "enabled", 00:17:40.015 "listen_address": { 00:17:40.015 "trtype": "TCP", 00:17:40.015 "adrfam": "IPv4", 00:17:40.015 "traddr": "10.0.0.2", 00:17:40.015 "trsvcid": "4420" 00:17:40.015 }, 00:17:40.015 "peer_address": { 00:17:40.015 "trtype": "TCP", 00:17:40.015 "adrfam": "IPv4", 
00:17:40.015 "traddr": "10.0.0.1", 00:17:40.016 "trsvcid": "41376" 00:17:40.016 }, 00:17:40.016 "auth": { 00:17:40.016 "state": "completed", 00:17:40.016 "digest": "sha256", 00:17:40.016 "dhgroup": "ffdhe3072" 00:17:40.016 } 00:17:40.016 } 00:17:40.016 ]' 00:17:40.016 10:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:40.016 10:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:40.016 10:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:40.016 10:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:40.016 10:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:40.016 10:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.016 10:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.016 10:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.275 10:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:OTc2ODk3OTc5NzBkZmE4MmY0OGY2MGFmNjY4ZDJiOWFjOGRiNDY1YzdmYmNhNjYxN2Y4OGE4OWM1ODhiMWU3ZReWLI4=: 00:17:40.845 10:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.845 10:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:40.845 10:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:40.845 10:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.845 10:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:40.845 10:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:40.845 10:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:40.845 10:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:40.845 10:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:41.106 10:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:17:41.106 10:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:41.106 10:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:41.106 10:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:41.106 10:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:41.106 10:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.106 10:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.106 10:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:41.106 10:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.106 10:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:41.106 10:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.106 10:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.367 00:17:41.367 10:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:41.367 10:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:41.367 10:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.367 10:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.367 10:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.367 10:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:41.367 10:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.628 10:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:41.628 10:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:41.628 { 00:17:41.628 "cntlid": 25, 00:17:41.628 "qid": 0, 00:17:41.628 "state": "enabled", 00:17:41.628 "listen_address": { 00:17:41.628 "trtype": "TCP", 00:17:41.628 "adrfam": "IPv4", 00:17:41.628 "traddr": "10.0.0.2", 00:17:41.628 "trsvcid": "4420" 00:17:41.628 }, 00:17:41.628 "peer_address": { 00:17:41.628 "trtype": "TCP", 00:17:41.628 "adrfam": "IPv4", 00:17:41.628 "traddr": "10.0.0.1", 00:17:41.628 "trsvcid": "41408" 00:17:41.628 }, 00:17:41.628 "auth": { 00:17:41.628 "state": "completed", 00:17:41.628 "digest": "sha256", 00:17:41.628 "dhgroup": "ffdhe4096" 00:17:41.628 } 00:17:41.628 } 00:17:41.628 ]' 00:17:41.628 10:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:41.628 10:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:41.628 10:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:41.628 10:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:41.628 10:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.628 10:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.628 10:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.628 10:43:05 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.888 10:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:YTJlMmQ1Zjg4ODcyNDRmNzQwNzNlNTg0YzJhMWExYWM0MGRlYzk3YmIwNjVkNzJhYnyJog==: --dhchap-ctrl-secret DHHC-1:03:ZWQwZTkzYzU5YzFhMWE5ZjQwMTA5Yjg5NDA1NzI2NzU2MWY3MjA3NDQ2NmU2YTIyMDMzZTdhYjcxMTAxMWJlMjniCls=: 00:17:42.460 10:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.460 10:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:42.460 10:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:42.460 10:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.460 10:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:42.460 10:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:42.460 10:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:42.460 10:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:42.721 10:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:17:42.721 10:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:42.721 10:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:42.721 10:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:42.721 10:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:42.721 10:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.721 10:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.721 10:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:42.721 10:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.721 10:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:42.721 10:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.721 10:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.981 00:17:42.982 10:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:42.982 10:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.982 10:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.982 10:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.982 10:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.982 10:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:42.982 10:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.982 10:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:42.982 10:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:42.982 { 00:17:42.982 "cntlid": 27, 00:17:42.982 "qid": 0, 00:17:42.982 "state": "enabled", 00:17:42.982 "listen_address": { 00:17:42.982 "trtype": "TCP", 00:17:42.982 "adrfam": "IPv4", 00:17:42.982 "traddr": "10.0.0.2", 00:17:42.982 "trsvcid": "4420" 00:17:42.982 }, 00:17:42.982 "peer_address": { 00:17:42.982 "trtype": "TCP", 00:17:42.982 "adrfam": "IPv4", 00:17:42.982 "traddr": "10.0.0.1", 00:17:42.982 "trsvcid": "41430" 00:17:42.982 }, 00:17:42.982 "auth": { 00:17:42.982 "state": "completed", 00:17:42.982 "digest": "sha256", 00:17:42.982 "dhgroup": "ffdhe4096" 00:17:42.982 } 00:17:42.982 } 00:17:42.982 ]' 00:17:42.982 10:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:43.243 10:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:43.243 10:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:43.243 10:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:43.243 10:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:43.243 10:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.243 10:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.243 10:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.503 10:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:YmEyNjgwMzZjZTg5ODcwZDBhNWZjOGNlOGY2MGVmZWTj5lnq: --dhchap-ctrl-secret DHHC-1:02:MTZlNjk0MTA2NGJjOWRiOGJmZGM2ODk0YWU1OTE4MjM2ODJjMWViZmQyZTRiYmZmr9e5tQ==: 00:17:44.074 10:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.074 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.074 10:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
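Each pass also exercises the Linux kernel initiator against the same subsystem before tearing the host entry down again; unlike the SPDK host, nvme-cli takes the secrets directly in DHHC-1 wire format on the command line. A sketch of that step, using placeholder variables for the secrets (the full DHHC-1 values appear verbatim in the log above):

    # kernel initiator: authenticate with DHHC-1 formatted secrets (placeholders here)
    HOST_KEY='DHHC-1:01:<host secret as printed in the log>'
    CTRL_KEY='DHHC-1:02:<controller secret as printed in the log>'
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
        --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 \
        --dhchap-secret "$HOST_KEY" --dhchap-ctrl-secret "$CTRL_KEY"

    # tear down before the next key/dhgroup combination
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    $TARGET_RPC nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396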
00:17:44.074 10:43:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:44.074 10:43:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.074 10:43:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:44.074 10:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:44.074 10:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:44.074 10:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:44.334 10:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:17:44.334 10:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:44.334 10:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:44.334 10:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:44.334 10:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:44.334 10:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.334 10:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.334 10:43:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:44.334 10:43:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.334 10:43:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:44.334 10:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.334 10:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.595 00:17:44.595 10:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:44.595 10:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:44.595 10:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.595 10:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.595 10:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.595 10:43:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:44.595 10:43:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.595 10:43:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:44.595 
10:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:44.595 { 00:17:44.595 "cntlid": 29, 00:17:44.595 "qid": 0, 00:17:44.595 "state": "enabled", 00:17:44.595 "listen_address": { 00:17:44.595 "trtype": "TCP", 00:17:44.595 "adrfam": "IPv4", 00:17:44.595 "traddr": "10.0.0.2", 00:17:44.595 "trsvcid": "4420" 00:17:44.595 }, 00:17:44.595 "peer_address": { 00:17:44.595 "trtype": "TCP", 00:17:44.595 "adrfam": "IPv4", 00:17:44.595 "traddr": "10.0.0.1", 00:17:44.595 "trsvcid": "41448" 00:17:44.595 }, 00:17:44.595 "auth": { 00:17:44.595 "state": "completed", 00:17:44.595 "digest": "sha256", 00:17:44.595 "dhgroup": "ffdhe4096" 00:17:44.595 } 00:17:44.595 } 00:17:44.595 ]' 00:17:44.595 10:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:44.856 10:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:44.856 10:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:44.856 10:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:44.856 10:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:44.856 10:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.856 10:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.856 10:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.116 10:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZTIwMmY5MjNjMjUwOWRkZTE5ODU1MzliMTVjYTg2MjhiNTRmNzBiNDQ2NDdiYjZjqlA85g==: --dhchap-ctrl-secret DHHC-1:01:OTM3NTdlMGZlMzIzZDM5YzJiYTBmMjY5YjNiZTUxNWbBSYh7: 00:17:45.687 10:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.687 10:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:45.687 10:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:45.687 10:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.687 10:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:45.687 10:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:45.687 10:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:45.687 10:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:45.948 10:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:17:45.948 10:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:45.948 10:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:17:45.948 10:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:45.948 10:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:45.948 10:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.948 10:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:45.948 10:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:45.948 10:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.948 10:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:45.948 10:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:45.948 10:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:46.208 00:17:46.208 10:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:46.208 10:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:46.208 10:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.208 10:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.208 10:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.208 10:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:46.208 10:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.208 10:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:46.208 10:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:46.208 { 00:17:46.208 "cntlid": 31, 00:17:46.208 "qid": 0, 00:17:46.208 "state": "enabled", 00:17:46.208 "listen_address": { 00:17:46.208 "trtype": "TCP", 00:17:46.208 "adrfam": "IPv4", 00:17:46.208 "traddr": "10.0.0.2", 00:17:46.208 "trsvcid": "4420" 00:17:46.208 }, 00:17:46.208 "peer_address": { 00:17:46.208 "trtype": "TCP", 00:17:46.208 "adrfam": "IPv4", 00:17:46.208 "traddr": "10.0.0.1", 00:17:46.208 "trsvcid": "41474" 00:17:46.208 }, 00:17:46.208 "auth": { 00:17:46.208 "state": "completed", 00:17:46.208 "digest": "sha256", 00:17:46.208 "dhgroup": "ffdhe4096" 00:17:46.208 } 00:17:46.208 } 00:17:46.208 ]' 00:17:46.469 10:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.469 10:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:46.469 10:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.469 10:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:46.469 10:43:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.469 10:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.469 10:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.469 10:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.729 10:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:OTc2ODk3OTc5NzBkZmE4MmY0OGY2MGFmNjY4ZDJiOWFjOGRiNDY1YzdmYmNhNjYxN2Y4OGE4OWM1ODhiMWU3ZReWLI4=: 00:17:47.300 10:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.300 10:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:47.300 10:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:47.300 10:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.300 10:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:47.300 10:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:47.300 10:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:47.300 10:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:47.300 10:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:47.560 10:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:17:47.560 10:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:47.560 10:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:47.560 10:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:47.560 10:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:47.560 10:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.560 10:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.560 10:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:47.560 10:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.560 10:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:47.560 10:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:17:47.560 10:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.820 00:17:47.820 10:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:47.820 10:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:47.820 10:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.080 10:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.080 10:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.080 10:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:48.080 10:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.080 10:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:48.080 10:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:48.080 { 00:17:48.080 "cntlid": 33, 00:17:48.080 "qid": 0, 00:17:48.080 "state": "enabled", 00:17:48.080 "listen_address": { 00:17:48.080 "trtype": "TCP", 00:17:48.080 "adrfam": "IPv4", 00:17:48.080 "traddr": "10.0.0.2", 00:17:48.080 "trsvcid": "4420" 00:17:48.080 }, 00:17:48.080 "peer_address": { 00:17:48.080 "trtype": "TCP", 00:17:48.080 "adrfam": "IPv4", 00:17:48.080 "traddr": "10.0.0.1", 00:17:48.080 "trsvcid": "34036" 00:17:48.080 }, 00:17:48.080 "auth": { 00:17:48.080 "state": "completed", 00:17:48.080 "digest": "sha256", 00:17:48.080 "dhgroup": "ffdhe6144" 00:17:48.080 } 00:17:48.080 } 00:17:48.080 ]' 00:17:48.080 10:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.080 10:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:48.080 10:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.080 10:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:48.080 10:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:48.080 10:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.080 10:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.080 10:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.340 10:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:YTJlMmQ1Zjg4ODcyNDRmNzQwNzNlNTg0YzJhMWExYWM0MGRlYzk3YmIwNjVkNzJhYnyJog==: --dhchap-ctrl-secret DHHC-1:03:ZWQwZTkzYzU5YzFhMWE5ZjQwMTA5Yjg5NDA1NzI2NzU2MWY3MjA3NDQ2NmU2YTIyMDMzZTdhYjcxMTAxMWJlMjniCls=: 00:17:49.280 10:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:49.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.280 10:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:49.280 10:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:49.280 10:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.280 10:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:49.280 10:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:49.280 10:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:49.280 10:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:49.280 10:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:17:49.280 10:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.280 10:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:49.280 10:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:49.280 10:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:49.280 10:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.280 10:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.280 10:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:49.280 10:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.280 10:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:49.280 10:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.280 10:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.540 00:17:49.540 10:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.540 10:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.540 10:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.802 10:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.802 10:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:17:49.802 10:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:49.802 10:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.802 10:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:49.802 10:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:49.802 { 00:17:49.802 "cntlid": 35, 00:17:49.802 "qid": 0, 00:17:49.802 "state": "enabled", 00:17:49.802 "listen_address": { 00:17:49.802 "trtype": "TCP", 00:17:49.802 "adrfam": "IPv4", 00:17:49.802 "traddr": "10.0.0.2", 00:17:49.802 "trsvcid": "4420" 00:17:49.802 }, 00:17:49.802 "peer_address": { 00:17:49.802 "trtype": "TCP", 00:17:49.802 "adrfam": "IPv4", 00:17:49.802 "traddr": "10.0.0.1", 00:17:49.802 "trsvcid": "34060" 00:17:49.802 }, 00:17:49.802 "auth": { 00:17:49.802 "state": "completed", 00:17:49.802 "digest": "sha256", 00:17:49.802 "dhgroup": "ffdhe6144" 00:17:49.802 } 00:17:49.802 } 00:17:49.802 ]' 00:17:49.802 10:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:49.802 10:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:49.802 10:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:49.802 10:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:49.802 10:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:49.802 10:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.802 10:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.802 10:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.062 10:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:YmEyNjgwMzZjZTg5ODcwZDBhNWZjOGNlOGY2MGVmZWTj5lnq: --dhchap-ctrl-secret DHHC-1:02:MTZlNjk0MTA2NGJjOWRiOGJmZGM2ODk0YWU1OTE4MjM2ODJjMWViZmQyZTRiYmZmr9e5tQ==: 00:17:51.002 10:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.002 10:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:51.002 10:43:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:51.002 10:43:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.002 10:43:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:51.002 10:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:51.002 10:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:51.002 10:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
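The qpair listing printed just below is what the test reads back after each attach: the controller name is verified on the host, and the negotiated authentication parameters are verified on the target. A sketch of those checks, with rpc.py and $TARGET_RPC as the same placeholders as above and the expected values taken from this sha256/ffdhe6144 pass:

    # host side: the attached controller should be reported as nvme0
    rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'      # expect: nvme0

    # target side: the qpair should show a completed DH-HMAC-CHAP negotiation
    $TARGET_RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'    # expect: completed
    $TARGET_RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.digest'   # expect: sha256
    $TARGET_RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.dhgroup'  # expect: ffdhe6144

    # detach on the host before the next iteration
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0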
00:17:51.002 10:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:17:51.002 10:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.002 10:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:51.002 10:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:51.002 10:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:51.002 10:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.002 10:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.002 10:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:51.002 10:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.002 10:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:51.002 10:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.002 10:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.262 00:17:51.262 10:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:51.262 10:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:51.262 10:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.523 10:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.523 10:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.523 10:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:51.523 10:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.523 10:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:51.523 10:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:51.523 { 00:17:51.523 "cntlid": 37, 00:17:51.523 "qid": 0, 00:17:51.523 "state": "enabled", 00:17:51.523 "listen_address": { 00:17:51.523 "trtype": "TCP", 00:17:51.523 "adrfam": "IPv4", 00:17:51.523 "traddr": "10.0.0.2", 00:17:51.523 "trsvcid": "4420" 00:17:51.523 }, 00:17:51.523 "peer_address": { 00:17:51.523 "trtype": "TCP", 00:17:51.523 "adrfam": "IPv4", 00:17:51.523 "traddr": "10.0.0.1", 00:17:51.523 "trsvcid": "34092" 00:17:51.523 }, 00:17:51.523 "auth": { 00:17:51.523 "state": "completed", 00:17:51.523 "digest": "sha256", 00:17:51.523 "dhgroup": "ffdhe6144" 00:17:51.523 } 00:17:51.523 } 00:17:51.523 ]' 00:17:51.523 10:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:17:51.523 10:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:51.523 10:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:51.523 10:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:51.523 10:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:51.523 10:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.523 10:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.523 10:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.783 10:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZTIwMmY5MjNjMjUwOWRkZTE5ODU1MzliMTVjYTg2MjhiNTRmNzBiNDQ2NDdiYjZjqlA85g==: --dhchap-ctrl-secret DHHC-1:01:OTM3NTdlMGZlMzIzZDM5YzJiYTBmMjY5YjNiZTUxNWbBSYh7: 00:17:52.354 10:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.354 10:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:52.354 10:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:52.354 10:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.614 10:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:52.614 10:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:52.614 10:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:52.614 10:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:52.614 10:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:17:52.614 10:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:52.614 10:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:52.614 10:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:52.614 10:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:52.614 10:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.614 10:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:52.614 10:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:52.614 10:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.614 10:43:16 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:52.614 10:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:52.614 10:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:52.874 00:17:53.135 10:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:53.136 10:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.136 10:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:53.136 10:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.136 10:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.136 10:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:53.136 10:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.136 10:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:53.136 10:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:53.136 { 00:17:53.136 "cntlid": 39, 00:17:53.136 "qid": 0, 00:17:53.136 "state": "enabled", 00:17:53.136 "listen_address": { 00:17:53.136 "trtype": "TCP", 00:17:53.136 "adrfam": "IPv4", 00:17:53.136 "traddr": "10.0.0.2", 00:17:53.136 "trsvcid": "4420" 00:17:53.136 }, 00:17:53.136 "peer_address": { 00:17:53.136 "trtype": "TCP", 00:17:53.136 "adrfam": "IPv4", 00:17:53.136 "traddr": "10.0.0.1", 00:17:53.136 "trsvcid": "34122" 00:17:53.136 }, 00:17:53.136 "auth": { 00:17:53.136 "state": "completed", 00:17:53.136 "digest": "sha256", 00:17:53.136 "dhgroup": "ffdhe6144" 00:17:53.136 } 00:17:53.136 } 00:17:53.136 ]' 00:17:53.136 10:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:53.136 10:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:53.136 10:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:53.398 10:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:53.398 10:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:53.398 10:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.398 10:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.398 10:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.398 10:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret 
DHHC-1:03:OTc2ODk3OTc5NzBkZmE4MmY0OGY2MGFmNjY4ZDJiOWFjOGRiNDY1YzdmYmNhNjYxN2Y4OGE4OWM1ODhiMWU3ZReWLI4=: 00:17:54.340 10:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.340 10:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:54.340 10:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:54.340 10:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.340 10:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:54.340 10:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:54.340 10:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:54.340 10:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:54.340 10:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:54.340 10:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:17:54.340 10:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.340 10:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:54.340 10:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:54.340 10:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:54.340 10:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.340 10:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.340 10:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:54.340 10:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.340 10:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:54.340 10:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.340 10:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.918 00:17:54.918 10:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.919 10:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.919 10:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.182 10:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.182 10:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.182 10:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:55.182 10:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.182 10:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:55.182 10:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:55.182 { 00:17:55.182 "cntlid": 41, 00:17:55.182 "qid": 0, 00:17:55.182 "state": "enabled", 00:17:55.182 "listen_address": { 00:17:55.182 "trtype": "TCP", 00:17:55.182 "adrfam": "IPv4", 00:17:55.182 "traddr": "10.0.0.2", 00:17:55.182 "trsvcid": "4420" 00:17:55.182 }, 00:17:55.182 "peer_address": { 00:17:55.182 "trtype": "TCP", 00:17:55.182 "adrfam": "IPv4", 00:17:55.182 "traddr": "10.0.0.1", 00:17:55.182 "trsvcid": "34148" 00:17:55.182 }, 00:17:55.182 "auth": { 00:17:55.182 "state": "completed", 00:17:55.182 "digest": "sha256", 00:17:55.182 "dhgroup": "ffdhe8192" 00:17:55.182 } 00:17:55.182 } 00:17:55.182 ]' 00:17:55.182 10:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:55.182 10:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:55.182 10:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:55.182 10:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:55.182 10:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:55.182 10:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.182 10:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.182 10:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.442 10:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:YTJlMmQ1Zjg4ODcyNDRmNzQwNzNlNTg0YzJhMWExYWM0MGRlYzk3YmIwNjVkNzJhYnyJog==: --dhchap-ctrl-secret DHHC-1:03:ZWQwZTkzYzU5YzFhMWE5ZjQwMTA5Yjg5NDA1NzI2NzU2MWY3MjA3NDQ2NmU2YTIyMDMzZTdhYjcxMTAxMWJlMjniCls=: 00:17:56.013 10:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.013 10:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:56.013 10:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:56.013 10:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.013 10:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:56.013 10:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:17:56.013 10:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:56.013 10:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:56.273 10:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:17:56.273 10:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.273 10:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:56.273 10:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:56.273 10:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:56.273 10:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.273 10:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.273 10:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:56.273 10:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.273 10:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:56.273 10:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.273 10:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.844 00:17:56.844 10:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.844 10:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.844 10:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.105 10:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.105 10:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.105 10:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:57.105 10:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.105 10:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:57.105 10:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:57.105 { 00:17:57.105 "cntlid": 43, 00:17:57.105 "qid": 0, 00:17:57.105 "state": "enabled", 00:17:57.105 "listen_address": { 00:17:57.105 "trtype": "TCP", 00:17:57.105 "adrfam": "IPv4", 00:17:57.105 "traddr": "10.0.0.2", 00:17:57.105 "trsvcid": "4420" 00:17:57.105 }, 00:17:57.105 "peer_address": { 
00:17:57.105 "trtype": "TCP", 00:17:57.105 "adrfam": "IPv4", 00:17:57.105 "traddr": "10.0.0.1", 00:17:57.105 "trsvcid": "33244" 00:17:57.105 }, 00:17:57.105 "auth": { 00:17:57.105 "state": "completed", 00:17:57.105 "digest": "sha256", 00:17:57.105 "dhgroup": "ffdhe8192" 00:17:57.105 } 00:17:57.105 } 00:17:57.105 ]' 00:17:57.105 10:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:57.105 10:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:57.105 10:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:57.105 10:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:57.105 10:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:57.105 10:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.105 10:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.105 10:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.365 10:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:YmEyNjgwMzZjZTg5ODcwZDBhNWZjOGNlOGY2MGVmZWTj5lnq: --dhchap-ctrl-secret DHHC-1:02:MTZlNjk0MTA2NGJjOWRiOGJmZGM2ODk0YWU1OTE4MjM2ODJjMWViZmQyZTRiYmZmr9e5tQ==: 00:17:57.936 10:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.936 10:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:57.936 10:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:57.936 10:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.936 10:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:57.936 10:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.936 10:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:57.936 10:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:58.195 10:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:17:58.196 10:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:58.196 10:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:58.196 10:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:58.196 10:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:58.196 10:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.196 10:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.196 10:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:58.196 10:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.196 10:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:58.196 10:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.196 10:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.765 00:17:58.765 10:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.765 10:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.765 10:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.765 10:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.765 10:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.765 10:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:58.766 10:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.766 10:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:58.766 10:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.766 { 00:17:58.766 "cntlid": 45, 00:17:58.766 "qid": 0, 00:17:58.766 "state": "enabled", 00:17:58.766 "listen_address": { 00:17:58.766 "trtype": "TCP", 00:17:58.766 "adrfam": "IPv4", 00:17:58.766 "traddr": "10.0.0.2", 00:17:58.766 "trsvcid": "4420" 00:17:58.766 }, 00:17:58.766 "peer_address": { 00:17:58.766 "trtype": "TCP", 00:17:58.766 "adrfam": "IPv4", 00:17:58.766 "traddr": "10.0.0.1", 00:17:58.766 "trsvcid": "33272" 00:17:58.766 }, 00:17:58.766 "auth": { 00:17:58.766 "state": "completed", 00:17:58.766 "digest": "sha256", 00:17:58.766 "dhgroup": "ffdhe8192" 00:17:58.766 } 00:17:58.766 } 00:17:58.766 ]' 00:17:59.026 10:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.026 10:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:59.026 10:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.026 10:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:59.026 10:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:59.026 10:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.026 10:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.026 10:43:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.286 10:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZTIwMmY5MjNjMjUwOWRkZTE5ODU1MzliMTVjYTg2MjhiNTRmNzBiNDQ2NDdiYjZjqlA85g==: --dhchap-ctrl-secret DHHC-1:01:OTM3NTdlMGZlMzIzZDM5YzJiYTBmMjY5YjNiZTUxNWbBSYh7: 00:17:59.857 10:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.857 10:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:59.857 10:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:59.857 10:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.857 10:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:59.857 10:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.857 10:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:59.857 10:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:00.117 10:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:18:00.117 10:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.117 10:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:00.117 10:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:00.117 10:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:00.117 10:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.117 10:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:00.117 10:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:00.117 10:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.117 10:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:00.117 10:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:00.117 10:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
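The pass above is one iteration of the script's connect_authenticate loop (here sha256/ffdhe8192 with key3): configure the host's allowed DH-HMAC-CHAP digests and DH groups, register the host NQN on the subsystem with a key pair, attach a controller over the host RPC socket, inspect the resulting queue pair, detach, then repeat the handshake through nvme-cli. A minimal sketch of a single pass, using only commands visible in this trace and assuming the subsystem nqn.2024-03.io.spdk:cnode0, the 10.0.0.2:4420 listener, the key0/ckey0 key names, and the host/target RPC sockets were already set up earlier in the script ($key/$ckey stand in for the corresponding plain DHHC-1 secrets):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
  hostid=00539ede-7deb-ec11-9bc7-a4bf01928396
  subnqn=nqn.2024-03.io.spdk:cnode0
  # host side (host RPC socket): restrict the digests/dhgroups the initiator will offer
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
  # target side (default RPC socket assumed): allow this host with key0/ckey0
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # host side: attach, check the authenticated qpair, then detach
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  "$rpc" nvmf_subsystem_get_qpairs "$subnqn"          # .auth.state should read "completed"
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  # same handshake through the kernel initiator, secrets passed in DHHC-1 form
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
      --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
  nvme disconnect -n "$subnqn"
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"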
00:18:00.688 00:18:00.688 10:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.688 10:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.688 10:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.688 10:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.688 10:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.688 10:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:00.688 10:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.688 10:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:00.688 10:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.688 { 00:18:00.688 "cntlid": 47, 00:18:00.688 "qid": 0, 00:18:00.688 "state": "enabled", 00:18:00.688 "listen_address": { 00:18:00.688 "trtype": "TCP", 00:18:00.688 "adrfam": "IPv4", 00:18:00.688 "traddr": "10.0.0.2", 00:18:00.688 "trsvcid": "4420" 00:18:00.688 }, 00:18:00.688 "peer_address": { 00:18:00.688 "trtype": "TCP", 00:18:00.688 "adrfam": "IPv4", 00:18:00.688 "traddr": "10.0.0.1", 00:18:00.688 "trsvcid": "33300" 00:18:00.688 }, 00:18:00.688 "auth": { 00:18:00.688 "state": "completed", 00:18:00.688 "digest": "sha256", 00:18:00.688 "dhgroup": "ffdhe8192" 00:18:00.688 } 00:18:00.688 } 00:18:00.688 ]' 00:18:00.688 10:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.948 10:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:00.948 10:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.948 10:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:00.948 10:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.948 10:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.948 10:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.948 10:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.209 10:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:OTc2ODk3OTc5NzBkZmE4MmY0OGY2MGFmNjY4ZDJiOWFjOGRiNDY1YzdmYmNhNjYxN2Y4OGE4OWM1ODhiMWU3ZReWLI4=: 00:18:01.780 10:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.780 10:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:01.780 10:43:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:01.780 10:43:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.780 
10:43:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:01.780 10:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:01.780 10:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:01.780 10:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.780 10:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:01.780 10:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:02.041 10:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:18:02.041 10:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.041 10:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:02.041 10:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:02.041 10:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:02.041 10:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.041 10:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.041 10:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:02.041 10:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.042 10:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:02.042 10:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.042 10:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.303 00:18:02.303 10:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.303 10:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.303 10:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.303 10:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.303 10:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.303 10:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:02.303 10:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.303 10:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:02.303 10:43:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.303 { 00:18:02.303 "cntlid": 49, 00:18:02.303 "qid": 0, 00:18:02.303 "state": "enabled", 00:18:02.303 "listen_address": { 00:18:02.303 "trtype": "TCP", 00:18:02.303 "adrfam": "IPv4", 00:18:02.303 "traddr": "10.0.0.2", 00:18:02.303 "trsvcid": "4420" 00:18:02.303 }, 00:18:02.303 "peer_address": { 00:18:02.303 "trtype": "TCP", 00:18:02.303 "adrfam": "IPv4", 00:18:02.303 "traddr": "10.0.0.1", 00:18:02.303 "trsvcid": "33322" 00:18:02.303 }, 00:18:02.303 "auth": { 00:18:02.303 "state": "completed", 00:18:02.303 "digest": "sha384", 00:18:02.303 "dhgroup": "null" 00:18:02.303 } 00:18:02.303 } 00:18:02.303 ]' 00:18:02.303 10:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.564 10:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:02.564 10:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.564 10:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:02.564 10:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.564 10:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.564 10:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.564 10:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.825 10:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:YTJlMmQ1Zjg4ODcyNDRmNzQwNzNlNTg0YzJhMWExYWM0MGRlYzk3YmIwNjVkNzJhYnyJog==: --dhchap-ctrl-secret DHHC-1:03:ZWQwZTkzYzU5YzFhMWE5ZjQwMTA5Yjg5NDA1NzI2NzU2MWY3MjA3NDQ2NmU2YTIyMDMzZTdhYjcxMTAxMWJlMjniCls=: 00:18:03.397 10:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.397 10:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:03.397 10:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:03.397 10:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.397 10:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:03.397 10:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.397 10:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:03.397 10:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:03.658 10:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:03.658 10:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.658 10:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:18:03.658 10:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:03.658 10:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:03.658 10:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.658 10:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.658 10:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:03.658 10:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.658 10:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:03.658 10:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.658 10:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.658 00:18:03.920 10:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.920 10:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.920 10:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.920 10:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.920 10:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.920 10:43:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:03.920 10:43:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.920 10:43:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:03.920 10:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:03.920 { 00:18:03.920 "cntlid": 51, 00:18:03.920 "qid": 0, 00:18:03.920 "state": "enabled", 00:18:03.920 "listen_address": { 00:18:03.920 "trtype": "TCP", 00:18:03.920 "adrfam": "IPv4", 00:18:03.920 "traddr": "10.0.0.2", 00:18:03.920 "trsvcid": "4420" 00:18:03.920 }, 00:18:03.920 "peer_address": { 00:18:03.920 "trtype": "TCP", 00:18:03.920 "adrfam": "IPv4", 00:18:03.920 "traddr": "10.0.0.1", 00:18:03.920 "trsvcid": "33336" 00:18:03.920 }, 00:18:03.920 "auth": { 00:18:03.920 "state": "completed", 00:18:03.920 "digest": "sha384", 00:18:03.920 "dhgroup": "null" 00:18:03.920 } 00:18:03.920 } 00:18:03.920 ]' 00:18:03.920 10:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:03.920 10:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:03.920 10:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.920 10:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 
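The three jq probes in this stretch of the trace are the verification step: after each attach, the script pulls the qpair list from the target and asserts that the digest, DH group, and auth state match what was just negotiated (sha384 with the "null" DH group at this point in the loop). Standing alone, with $rpc pointing at scripts/rpc.py on the target's RPC socket (an assumption; the harness wraps this in rpc_cmd), that check is roughly:

  qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]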
00:18:04.183 10:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.183 10:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.183 10:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.183 10:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.183 10:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:YmEyNjgwMzZjZTg5ODcwZDBhNWZjOGNlOGY2MGVmZWTj5lnq: --dhchap-ctrl-secret DHHC-1:02:MTZlNjk0MTA2NGJjOWRiOGJmZGM2ODk0YWU1OTE4MjM2ODJjMWViZmQyZTRiYmZmr9e5tQ==: 00:18:05.127 10:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.127 10:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:05.127 10:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:05.127 10:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.127 10:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:05.127 10:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.127 10:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:05.127 10:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:05.127 10:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:18:05.127 10:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.127 10:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:05.127 10:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:05.127 10:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:05.127 10:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.127 10:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.127 10:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:05.127 10:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.127 10:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:05.127 10:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:18:05.127 10:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.389 00:18:05.389 10:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.389 10:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.389 10:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:05.651 10:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.651 10:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.651 10:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:05.651 10:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.651 10:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:05.651 10:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.651 { 00:18:05.651 "cntlid": 53, 00:18:05.651 "qid": 0, 00:18:05.651 "state": "enabled", 00:18:05.651 "listen_address": { 00:18:05.651 "trtype": "TCP", 00:18:05.651 "adrfam": "IPv4", 00:18:05.651 "traddr": "10.0.0.2", 00:18:05.651 "trsvcid": "4420" 00:18:05.651 }, 00:18:05.651 "peer_address": { 00:18:05.651 "trtype": "TCP", 00:18:05.651 "adrfam": "IPv4", 00:18:05.651 "traddr": "10.0.0.1", 00:18:05.651 "trsvcid": "33360" 00:18:05.651 }, 00:18:05.651 "auth": { 00:18:05.651 "state": "completed", 00:18:05.651 "digest": "sha384", 00:18:05.651 "dhgroup": "null" 00:18:05.651 } 00:18:05.651 } 00:18:05.651 ]' 00:18:05.651 10:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.651 10:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:05.651 10:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.651 10:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:05.651 10:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.651 10:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.651 10:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.651 10:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.913 10:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZTIwMmY5MjNjMjUwOWRkZTE5ODU1MzliMTVjYTg2MjhiNTRmNzBiNDQ2NDdiYjZjqlA85g==: --dhchap-ctrl-secret DHHC-1:01:OTM3NTdlMGZlMzIzZDM5YzJiYTBmMjY5YjNiZTUxNWbBSYh7: 00:18:06.487 10:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.764 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:18:06.764 10:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:06.764 10:43:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:06.764 10:43:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.764 10:43:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:06.764 10:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.764 10:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:06.764 10:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:06.764 10:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:18:06.764 10:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:06.764 10:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:06.764 10:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:06.764 10:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:06.764 10:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.764 10:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:06.764 10:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:06.764 10:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.764 10:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:06.764 10:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:06.764 10:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:07.094 00:18:07.094 10:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.094 10:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.094 10:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.383 10:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.383 10:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.383 10:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:07.383 10:43:31 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:07.383 10:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:07.383 10:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:07.383 { 00:18:07.383 "cntlid": 55, 00:18:07.383 "qid": 0, 00:18:07.383 "state": "enabled", 00:18:07.383 "listen_address": { 00:18:07.383 "trtype": "TCP", 00:18:07.383 "adrfam": "IPv4", 00:18:07.383 "traddr": "10.0.0.2", 00:18:07.383 "trsvcid": "4420" 00:18:07.383 }, 00:18:07.383 "peer_address": { 00:18:07.383 "trtype": "TCP", 00:18:07.383 "adrfam": "IPv4", 00:18:07.383 "traddr": "10.0.0.1", 00:18:07.383 "trsvcid": "55846" 00:18:07.383 }, 00:18:07.383 "auth": { 00:18:07.383 "state": "completed", 00:18:07.383 "digest": "sha384", 00:18:07.383 "dhgroup": "null" 00:18:07.383 } 00:18:07.383 } 00:18:07.383 ]' 00:18:07.383 10:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:07.383 10:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:07.383 10:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.383 10:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:07.383 10:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:07.383 10:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.383 10:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.383 10:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.643 10:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:OTc2ODk3OTc5NzBkZmE4MmY0OGY2MGFmNjY4ZDJiOWFjOGRiNDY1YzdmYmNhNjYxN2Y4OGE4OWM1ODhiMWU3ZReWLI4=: 00:18:08.214 10:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.214 10:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:08.214 10:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:08.214 10:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.214 10:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:08.214 10:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:08.214 10:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.214 10:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:08.214 10:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:08.474 10:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:18:08.474 
10:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.474 10:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:08.474 10:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:08.474 10:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:08.474 10:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.474 10:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.474 10:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:08.474 10:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.474 10:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:08.474 10:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.474 10:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.735 00:18:08.735 10:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:08.735 10:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:08.735 10:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.735 10:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.735 10:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.735 10:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:08.735 10:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.735 10:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:08.735 10:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.735 { 00:18:08.735 "cntlid": 57, 00:18:08.735 "qid": 0, 00:18:08.735 "state": "enabled", 00:18:08.735 "listen_address": { 00:18:08.735 "trtype": "TCP", 00:18:08.735 "adrfam": "IPv4", 00:18:08.735 "traddr": "10.0.0.2", 00:18:08.735 "trsvcid": "4420" 00:18:08.735 }, 00:18:08.735 "peer_address": { 00:18:08.735 "trtype": "TCP", 00:18:08.735 "adrfam": "IPv4", 00:18:08.735 "traddr": "10.0.0.1", 00:18:08.735 "trsvcid": "55878" 00:18:08.735 }, 00:18:08.735 "auth": { 00:18:08.735 "state": "completed", 00:18:08.735 "digest": "sha384", 00:18:08.735 "dhgroup": "ffdhe2048" 00:18:08.735 } 00:18:08.735 } 00:18:08.735 ]' 00:18:08.735 10:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.996 10:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:08.996 10:43:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.996 10:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:08.996 10:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.996 10:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.996 10:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.996 10:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.996 10:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:YTJlMmQ1Zjg4ODcyNDRmNzQwNzNlNTg0YzJhMWExYWM0MGRlYzk3YmIwNjVkNzJhYnyJog==: --dhchap-ctrl-secret DHHC-1:03:ZWQwZTkzYzU5YzFhMWE5ZjQwMTA5Yjg5NDA1NzI2NzU2MWY3MjA3NDQ2NmU2YTIyMDMzZTdhYjcxMTAxMWJlMjniCls=: 00:18:09.939 10:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.939 10:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:09.939 10:43:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:09.939 10:43:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.939 10:43:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:09.939 10:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.939 10:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:09.939 10:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:09.939 10:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:18:09.939 10:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.939 10:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:09.939 10:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:09.939 10:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:09.939 10:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.939 10:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.939 10:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:09.939 10:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.939 10:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:09.939 10:43:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.939 10:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.200 00:18:10.200 10:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.200 10:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.200 10:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.461 10:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.461 10:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.461 10:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:10.461 10:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.461 10:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:10.461 10:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.461 { 00:18:10.461 "cntlid": 59, 00:18:10.462 "qid": 0, 00:18:10.462 "state": "enabled", 00:18:10.462 "listen_address": { 00:18:10.462 "trtype": "TCP", 00:18:10.462 "adrfam": "IPv4", 00:18:10.462 "traddr": "10.0.0.2", 00:18:10.462 "trsvcid": "4420" 00:18:10.462 }, 00:18:10.462 "peer_address": { 00:18:10.462 "trtype": "TCP", 00:18:10.462 "adrfam": "IPv4", 00:18:10.462 "traddr": "10.0.0.1", 00:18:10.462 "trsvcid": "55908" 00:18:10.462 }, 00:18:10.462 "auth": { 00:18:10.462 "state": "completed", 00:18:10.462 "digest": "sha384", 00:18:10.462 "dhgroup": "ffdhe2048" 00:18:10.462 } 00:18:10.462 } 00:18:10.462 ]' 00:18:10.462 10:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.462 10:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:10.462 10:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.462 10:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:10.462 10:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.462 10:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.462 10:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.462 10:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.722 10:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret 
DHHC-1:01:YmEyNjgwMzZjZTg5ODcwZDBhNWZjOGNlOGY2MGVmZWTj5lnq: --dhchap-ctrl-secret DHHC-1:02:MTZlNjk0MTA2NGJjOWRiOGJmZGM2ODk0YWU1OTE4MjM2ODJjMWViZmQyZTRiYmZmr9e5tQ==: 00:18:11.294 10:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.294 10:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:11.294 10:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:11.294 10:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.294 10:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:11.294 10:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.294 10:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:11.294 10:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:11.556 10:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:18:11.556 10:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.556 10:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:11.556 10:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:11.556 10:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:11.556 10:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.556 10:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.556 10:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:11.556 10:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.556 10:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:11.556 10:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.556 10:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.817 00:18:11.817 10:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.818 10:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.818 10:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:12.080 10:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.080 10:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.080 10:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:12.080 10:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.080 10:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:12.080 10:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.080 { 00:18:12.080 "cntlid": 61, 00:18:12.080 "qid": 0, 00:18:12.080 "state": "enabled", 00:18:12.080 "listen_address": { 00:18:12.080 "trtype": "TCP", 00:18:12.080 "adrfam": "IPv4", 00:18:12.080 "traddr": "10.0.0.2", 00:18:12.080 "trsvcid": "4420" 00:18:12.080 }, 00:18:12.080 "peer_address": { 00:18:12.080 "trtype": "TCP", 00:18:12.080 "adrfam": "IPv4", 00:18:12.080 "traddr": "10.0.0.1", 00:18:12.080 "trsvcid": "55932" 00:18:12.080 }, 00:18:12.080 "auth": { 00:18:12.080 "state": "completed", 00:18:12.080 "digest": "sha384", 00:18:12.080 "dhgroup": "ffdhe2048" 00:18:12.080 } 00:18:12.080 } 00:18:12.080 ]' 00:18:12.080 10:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:12.080 10:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:12.080 10:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.080 10:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:12.080 10:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:12.080 10:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.080 10:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.080 10:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.341 10:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZTIwMmY5MjNjMjUwOWRkZTE5ODU1MzliMTVjYTg2MjhiNTRmNzBiNDQ2NDdiYjZjqlA85g==: --dhchap-ctrl-secret DHHC-1:01:OTM3NTdlMGZlMzIzZDM5YzJiYTBmMjY5YjNiZTUxNWbBSYh7: 00:18:12.914 10:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.914 10:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:12.914 10:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:12.914 10:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.914 10:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:12.914 10:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.914 10:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe2048 00:18:12.914 10:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:13.176 10:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:18:13.176 10:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.176 10:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:13.176 10:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:13.176 10:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:13.176 10:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.176 10:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:13.176 10:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:13.176 10:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.176 10:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:13.176 10:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:13.176 10:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:13.437 00:18:13.437 10:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:13.437 10:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.437 10:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.698 10:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.698 10:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.698 10:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:13.698 10:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.698 10:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:13.698 10:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:13.698 { 00:18:13.698 "cntlid": 63, 00:18:13.698 "qid": 0, 00:18:13.698 "state": "enabled", 00:18:13.698 "listen_address": { 00:18:13.698 "trtype": "TCP", 00:18:13.698 "adrfam": "IPv4", 00:18:13.698 "traddr": "10.0.0.2", 00:18:13.698 "trsvcid": "4420" 00:18:13.698 }, 00:18:13.698 "peer_address": { 00:18:13.698 "trtype": "TCP", 00:18:13.698 "adrfam": "IPv4", 00:18:13.698 "traddr": "10.0.0.1", 00:18:13.698 "trsvcid": "55956" 00:18:13.698 }, 00:18:13.698 "auth": { 00:18:13.698 "state": "completed", 00:18:13.698 "digest": 
"sha384", 00:18:13.698 "dhgroup": "ffdhe2048" 00:18:13.698 } 00:18:13.698 } 00:18:13.698 ]' 00:18:13.698 10:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.698 10:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:13.698 10:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.698 10:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:13.698 10:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.698 10:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.698 10:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.698 10:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.959 10:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:OTc2ODk3OTc5NzBkZmE4MmY0OGY2MGFmNjY4ZDJiOWFjOGRiNDY1YzdmYmNhNjYxN2Y4OGE4OWM1ODhiMWU3ZReWLI4=: 00:18:14.532 10:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.532 10:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:14.532 10:43:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:14.532 10:43:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.532 10:43:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:14.532 10:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:14.532 10:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:14.532 10:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:14.532 10:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:14.793 10:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:18:14.793 10:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:14.793 10:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:14.793 10:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:14.793 10:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:14.793 10:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.793 10:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:18:14.793 10:43:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:14.793 10:43:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.793 10:43:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:14.793 10:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.793 10:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.054 00:18:15.054 10:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.054 10:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.054 10:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.315 10:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.315 10:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.315 10:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:15.315 10:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.315 10:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:15.315 10:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.315 { 00:18:15.315 "cntlid": 65, 00:18:15.315 "qid": 0, 00:18:15.315 "state": "enabled", 00:18:15.315 "listen_address": { 00:18:15.315 "trtype": "TCP", 00:18:15.315 "adrfam": "IPv4", 00:18:15.315 "traddr": "10.0.0.2", 00:18:15.315 "trsvcid": "4420" 00:18:15.315 }, 00:18:15.315 "peer_address": { 00:18:15.315 "trtype": "TCP", 00:18:15.315 "adrfam": "IPv4", 00:18:15.315 "traddr": "10.0.0.1", 00:18:15.315 "trsvcid": "55986" 00:18:15.315 }, 00:18:15.315 "auth": { 00:18:15.315 "state": "completed", 00:18:15.315 "digest": "sha384", 00:18:15.315 "dhgroup": "ffdhe3072" 00:18:15.315 } 00:18:15.315 } 00:18:15.315 ]' 00:18:15.315 10:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.315 10:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:15.315 10:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.315 10:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:15.315 10:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.315 10:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.315 10:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.315 10:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.576 
10:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:YTJlMmQ1Zjg4ODcyNDRmNzQwNzNlNTg0YzJhMWExYWM0MGRlYzk3YmIwNjVkNzJhYnyJog==: --dhchap-ctrl-secret DHHC-1:03:ZWQwZTkzYzU5YzFhMWE5ZjQwMTA5Yjg5NDA1NzI2NzU2MWY3MjA3NDQ2NmU2YTIyMDMzZTdhYjcxMTAxMWJlMjniCls=: 00:18:16.150 10:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.150 10:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:16.150 10:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:16.150 10:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.150 10:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:16.150 10:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.150 10:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:16.150 10:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:16.411 10:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:18:16.411 10:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:16.411 10:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:16.411 10:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:16.411 10:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:16.411 10:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.411 10:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.411 10:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:16.411 10:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.411 10:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:16.411 10:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.411 10:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.672 00:18:16.672 10:43:40 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.672 10:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.672 10:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.672 10:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.672 10:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.672 10:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:16.672 10:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.672 10:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:16.672 10:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.672 { 00:18:16.672 "cntlid": 67, 00:18:16.672 "qid": 0, 00:18:16.672 "state": "enabled", 00:18:16.672 "listen_address": { 00:18:16.672 "trtype": "TCP", 00:18:16.672 "adrfam": "IPv4", 00:18:16.672 "traddr": "10.0.0.2", 00:18:16.672 "trsvcid": "4420" 00:18:16.672 }, 00:18:16.672 "peer_address": { 00:18:16.672 "trtype": "TCP", 00:18:16.672 "adrfam": "IPv4", 00:18:16.672 "traddr": "10.0.0.1", 00:18:16.672 "trsvcid": "49580" 00:18:16.672 }, 00:18:16.672 "auth": { 00:18:16.672 "state": "completed", 00:18:16.672 "digest": "sha384", 00:18:16.672 "dhgroup": "ffdhe3072" 00:18:16.672 } 00:18:16.672 } 00:18:16.672 ]' 00:18:16.672 10:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.934 10:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:16.934 10:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.934 10:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:16.934 10:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.934 10:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.934 10:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.934 10:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.195 10:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:YmEyNjgwMzZjZTg5ODcwZDBhNWZjOGNlOGY2MGVmZWTj5lnq: --dhchap-ctrl-secret DHHC-1:02:MTZlNjk0MTA2NGJjOWRiOGJmZGM2ODk0YWU1OTE4MjM2ODJjMWViZmQyZTRiYmZmr9e5tQ==: 00:18:17.767 10:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.767 10:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:17.767 10:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:17.767 10:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.767 
10:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:17.767 10:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:17.767 10:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:17.767 10:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:18.027 10:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:18:18.027 10:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:18.027 10:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:18.027 10:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:18.027 10:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:18.027 10:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.027 10:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.027 10:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:18.027 10:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.027 10:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:18.027 10:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.027 10:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.287 00:18:18.287 10:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:18.287 10:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:18.287 10:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.287 10:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.287 10:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.287 10:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:18.287 10:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.287 10:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:18.287 10:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.287 { 00:18:18.287 "cntlid": 69, 00:18:18.287 "qid": 0, 00:18:18.287 "state": "enabled", 00:18:18.287 "listen_address": { 
00:18:18.287 "trtype": "TCP", 00:18:18.287 "adrfam": "IPv4", 00:18:18.287 "traddr": "10.0.0.2", 00:18:18.287 "trsvcid": "4420" 00:18:18.287 }, 00:18:18.287 "peer_address": { 00:18:18.287 "trtype": "TCP", 00:18:18.287 "adrfam": "IPv4", 00:18:18.287 "traddr": "10.0.0.1", 00:18:18.287 "trsvcid": "49608" 00:18:18.287 }, 00:18:18.287 "auth": { 00:18:18.287 "state": "completed", 00:18:18.287 "digest": "sha384", 00:18:18.287 "dhgroup": "ffdhe3072" 00:18:18.287 } 00:18:18.287 } 00:18:18.287 ]' 00:18:18.287 10:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.547 10:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:18.547 10:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.547 10:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:18.547 10:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.547 10:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.547 10:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.547 10:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.547 10:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZTIwMmY5MjNjMjUwOWRkZTE5ODU1MzliMTVjYTg2MjhiNTRmNzBiNDQ2NDdiYjZjqlA85g==: --dhchap-ctrl-secret DHHC-1:01:OTM3NTdlMGZlMzIzZDM5YzJiYTBmMjY5YjNiZTUxNWbBSYh7: 00:18:19.489 10:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.489 10:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:19.489 10:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:19.489 10:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.489 10:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:19.489 10:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:19.489 10:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:19.489 10:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:19.489 10:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:18:19.489 10:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.489 10:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:19.489 10:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:19.489 10:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:19.489 
10:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.489 10:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:19.489 10:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:19.489 10:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.489 10:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:19.489 10:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:19.489 10:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:19.749 00:18:19.749 10:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.749 10:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.749 10:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.009 10:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.009 10:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.009 10:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:20.009 10:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.009 10:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:20.009 10:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:20.009 { 00:18:20.009 "cntlid": 71, 00:18:20.009 "qid": 0, 00:18:20.009 "state": "enabled", 00:18:20.009 "listen_address": { 00:18:20.009 "trtype": "TCP", 00:18:20.009 "adrfam": "IPv4", 00:18:20.009 "traddr": "10.0.0.2", 00:18:20.009 "trsvcid": "4420" 00:18:20.009 }, 00:18:20.009 "peer_address": { 00:18:20.009 "trtype": "TCP", 00:18:20.009 "adrfam": "IPv4", 00:18:20.009 "traddr": "10.0.0.1", 00:18:20.009 "trsvcid": "49630" 00:18:20.009 }, 00:18:20.009 "auth": { 00:18:20.009 "state": "completed", 00:18:20.009 "digest": "sha384", 00:18:20.009 "dhgroup": "ffdhe3072" 00:18:20.009 } 00:18:20.009 } 00:18:20.009 ]' 00:18:20.009 10:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:20.009 10:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:20.009 10:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.009 10:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:20.009 10:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.010 10:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.010 10:43:44 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.010 10:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.270 10:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:OTc2ODk3OTc5NzBkZmE4MmY0OGY2MGFmNjY4ZDJiOWFjOGRiNDY1YzdmYmNhNjYxN2Y4OGE4OWM1ODhiMWU3ZReWLI4=: 00:18:21.211 10:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.211 10:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:21.211 10:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:21.211 10:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.212 10:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:21.212 10:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:21.212 10:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.212 10:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:21.212 10:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:21.212 10:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:18:21.212 10:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.212 10:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:21.212 10:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:21.212 10:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:21.212 10:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.212 10:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.212 10:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:21.212 10:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.212 10:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:21.212 10:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.212 10:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.472 00:18:21.472 10:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.472 10:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.472 10:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.472 10:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.472 10:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.472 10:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:21.472 10:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.732 10:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:21.732 10:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.732 { 00:18:21.732 "cntlid": 73, 00:18:21.732 "qid": 0, 00:18:21.732 "state": "enabled", 00:18:21.732 "listen_address": { 00:18:21.732 "trtype": "TCP", 00:18:21.732 "adrfam": "IPv4", 00:18:21.732 "traddr": "10.0.0.2", 00:18:21.732 "trsvcid": "4420" 00:18:21.732 }, 00:18:21.732 "peer_address": { 00:18:21.732 "trtype": "TCP", 00:18:21.732 "adrfam": "IPv4", 00:18:21.732 "traddr": "10.0.0.1", 00:18:21.732 "trsvcid": "49672" 00:18:21.732 }, 00:18:21.732 "auth": { 00:18:21.732 "state": "completed", 00:18:21.732 "digest": "sha384", 00:18:21.732 "dhgroup": "ffdhe4096" 00:18:21.732 } 00:18:21.732 } 00:18:21.732 ]' 00:18:21.733 10:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.733 10:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:21.733 10:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.733 10:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:21.733 10:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:21.733 10:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.733 10:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.733 10:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.992 10:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:YTJlMmQ1Zjg4ODcyNDRmNzQwNzNlNTg0YzJhMWExYWM0MGRlYzk3YmIwNjVkNzJhYnyJog==: --dhchap-ctrl-secret DHHC-1:03:ZWQwZTkzYzU5YzFhMWE5ZjQwMTA5Yjg5NDA1NzI2NzU2MWY3MjA3NDQ2NmU2YTIyMDMzZTdhYjcxMTAxMWJlMjniCls=: 00:18:22.563 10:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.563 10:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:22.563 10:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:22.563 10:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.563 10:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:22.563 10:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:22.563 10:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:22.563 10:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:22.824 10:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:18:22.824 10:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:22.824 10:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:22.824 10:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:22.824 10:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:22.824 10:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.824 10:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.824 10:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:22.824 10:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.824 10:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:22.824 10:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.824 10:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.086 00:18:23.086 10:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.086 10:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.086 10:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.347 10:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.347 10:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.347 10:43:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:23.347 10:43:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:18:23.347 10:43:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:23.347 10:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:23.347 { 00:18:23.347 "cntlid": 75, 00:18:23.347 "qid": 0, 00:18:23.347 "state": "enabled", 00:18:23.347 "listen_address": { 00:18:23.347 "trtype": "TCP", 00:18:23.347 "adrfam": "IPv4", 00:18:23.347 "traddr": "10.0.0.2", 00:18:23.347 "trsvcid": "4420" 00:18:23.347 }, 00:18:23.347 "peer_address": { 00:18:23.347 "trtype": "TCP", 00:18:23.347 "adrfam": "IPv4", 00:18:23.347 "traddr": "10.0.0.1", 00:18:23.347 "trsvcid": "49694" 00:18:23.347 }, 00:18:23.347 "auth": { 00:18:23.347 "state": "completed", 00:18:23.347 "digest": "sha384", 00:18:23.347 "dhgroup": "ffdhe4096" 00:18:23.347 } 00:18:23.347 } 00:18:23.347 ]' 00:18:23.347 10:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:23.347 10:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:23.347 10:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:23.347 10:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:23.347 10:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:23.347 10:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.347 10:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.347 10:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.607 10:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:YmEyNjgwMzZjZTg5ODcwZDBhNWZjOGNlOGY2MGVmZWTj5lnq: --dhchap-ctrl-secret DHHC-1:02:MTZlNjk0MTA2NGJjOWRiOGJmZGM2ODk0YWU1OTE4MjM2ODJjMWViZmQyZTRiYmZmr9e5tQ==: 00:18:24.180 10:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.180 10:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:24.180 10:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:24.180 10:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.180 10:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:24.180 10:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:24.180 10:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:24.180 10:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:24.441 10:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:18:24.441 10:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:18:24.441 10:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:24.441 10:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:24.441 10:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:24.441 10:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.441 10:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.441 10:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:24.441 10:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.441 10:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:24.441 10:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.441 10:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.702 00:18:24.702 10:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:24.702 10:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:24.702 10:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.962 10:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.962 10:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.962 10:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:24.962 10:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.962 10:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:24.962 10:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.962 { 00:18:24.962 "cntlid": 77, 00:18:24.962 "qid": 0, 00:18:24.962 "state": "enabled", 00:18:24.962 "listen_address": { 00:18:24.962 "trtype": "TCP", 00:18:24.962 "adrfam": "IPv4", 00:18:24.962 "traddr": "10.0.0.2", 00:18:24.962 "trsvcid": "4420" 00:18:24.962 }, 00:18:24.962 "peer_address": { 00:18:24.962 "trtype": "TCP", 00:18:24.962 "adrfam": "IPv4", 00:18:24.962 "traddr": "10.0.0.1", 00:18:24.962 "trsvcid": "49706" 00:18:24.962 }, 00:18:24.962 "auth": { 00:18:24.962 "state": "completed", 00:18:24.962 "digest": "sha384", 00:18:24.962 "dhgroup": "ffdhe4096" 00:18:24.962 } 00:18:24.962 } 00:18:24.962 ]' 00:18:24.962 10:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.962 10:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:24.962 10:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:18:24.962 10:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:24.962 10:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.963 10:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.963 10:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.963 10:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.223 10:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZTIwMmY5MjNjMjUwOWRkZTE5ODU1MzliMTVjYTg2MjhiNTRmNzBiNDQ2NDdiYjZjqlA85g==: --dhchap-ctrl-secret DHHC-1:01:OTM3NTdlMGZlMzIzZDM5YzJiYTBmMjY5YjNiZTUxNWbBSYh7: 00:18:25.795 10:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.795 10:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:25.795 10:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:25.795 10:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.795 10:43:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:25.795 10:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:25.795 10:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:25.795 10:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:26.056 10:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:18:26.056 10:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.056 10:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:26.056 10:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:26.056 10:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:26.056 10:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.056 10:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:26.056 10:43:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:26.056 10:43:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.056 10:43:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:26.056 10:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:26.056 10:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:26.316 00:18:26.316 10:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:26.316 10:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:26.316 10:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.577 10:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.577 10:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.577 10:43:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:26.577 10:43:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.577 10:43:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:26.577 10:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.577 { 00:18:26.577 "cntlid": 79, 00:18:26.577 "qid": 0, 00:18:26.577 "state": "enabled", 00:18:26.577 "listen_address": { 00:18:26.577 "trtype": "TCP", 00:18:26.577 "adrfam": "IPv4", 00:18:26.577 "traddr": "10.0.0.2", 00:18:26.577 "trsvcid": "4420" 00:18:26.577 }, 00:18:26.577 "peer_address": { 00:18:26.577 "trtype": "TCP", 00:18:26.577 "adrfam": "IPv4", 00:18:26.577 "traddr": "10.0.0.1", 00:18:26.577 "trsvcid": "49728" 00:18:26.577 }, 00:18:26.577 "auth": { 00:18:26.577 "state": "completed", 00:18:26.577 "digest": "sha384", 00:18:26.577 "dhgroup": "ffdhe4096" 00:18:26.577 } 00:18:26.577 } 00:18:26.577 ]' 00:18:26.577 10:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.577 10:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:26.577 10:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.577 10:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:26.577 10:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.577 10:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.577 10:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.577 10:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.838 10:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:OTc2ODk3OTc5NzBkZmE4MmY0OGY2MGFmNjY4ZDJiOWFjOGRiNDY1YzdmYmNhNjYxN2Y4OGE4OWM1ODhiMWU3ZReWLI4=: 00:18:27.408 10:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.408 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.408 10:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:27.408 10:43:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:27.408 10:43:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.408 10:43:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:27.408 10:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:27.408 10:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:27.408 10:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:27.409 10:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:27.669 10:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:18:27.669 10:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:27.669 10:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:27.669 10:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:27.669 10:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:27.669 10:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.669 10:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.669 10:43:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:27.669 10:43:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.669 10:43:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:27.669 10:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.670 10:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.948 00:18:27.948 10:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.948 10:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:27.948 10:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.220 10:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.220 10:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.220 10:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:28.221 10:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.221 10:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:28.221 10:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.221 { 00:18:28.221 "cntlid": 81, 00:18:28.221 "qid": 0, 00:18:28.221 "state": "enabled", 00:18:28.221 "listen_address": { 00:18:28.221 "trtype": "TCP", 00:18:28.221 "adrfam": "IPv4", 00:18:28.221 "traddr": "10.0.0.2", 00:18:28.221 "trsvcid": "4420" 00:18:28.221 }, 00:18:28.221 "peer_address": { 00:18:28.221 "trtype": "TCP", 00:18:28.221 "adrfam": "IPv4", 00:18:28.221 "traddr": "10.0.0.1", 00:18:28.221 "trsvcid": "56640" 00:18:28.221 }, 00:18:28.221 "auth": { 00:18:28.221 "state": "completed", 00:18:28.221 "digest": "sha384", 00:18:28.221 "dhgroup": "ffdhe6144" 00:18:28.221 } 00:18:28.221 } 00:18:28.221 ]' 00:18:28.221 10:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.221 10:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:28.221 10:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.221 10:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:28.221 10:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.221 10:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.221 10:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.221 10:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.482 10:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:YTJlMmQ1Zjg4ODcyNDRmNzQwNzNlNTg0YzJhMWExYWM0MGRlYzk3YmIwNjVkNzJhYnyJog==: --dhchap-ctrl-secret DHHC-1:03:ZWQwZTkzYzU5YzFhMWE5ZjQwMTA5Yjg5NDA1NzI2NzU2MWY3MjA3NDQ2NmU2YTIyMDMzZTdhYjcxMTAxMWJlMjniCls=: 00:18:29.054 10:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.054 10:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:29.054 10:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:29.054 10:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.054 10:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:29.054 10:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.054 10:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:29.054 10:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:29.316 10:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:18:29.316 10:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:29.316 10:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:29.316 10:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:29.316 10:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:29.316 10:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.316 10:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.316 10:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:29.316 10:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.316 10:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:29.316 10:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.316 10:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.576 00:18:29.576 10:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:29.576 10:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:29.576 10:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.837 10:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.837 10:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.837 10:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:29.837 10:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.837 10:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:29.837 10:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:29.837 { 00:18:29.837 "cntlid": 83, 00:18:29.837 "qid": 0, 00:18:29.837 "state": "enabled", 00:18:29.837 "listen_address": { 00:18:29.837 "trtype": "TCP", 00:18:29.837 "adrfam": "IPv4", 00:18:29.837 "traddr": "10.0.0.2", 00:18:29.837 "trsvcid": "4420" 00:18:29.837 }, 00:18:29.837 "peer_address": { 00:18:29.837 "trtype": "TCP", 00:18:29.837 "adrfam": "IPv4", 00:18:29.837 "traddr": "10.0.0.1", 00:18:29.837 "trsvcid": "56686" 00:18:29.837 }, 00:18:29.837 "auth": { 00:18:29.837 "state": "completed", 00:18:29.837 "digest": "sha384", 00:18:29.837 
"dhgroup": "ffdhe6144" 00:18:29.837 } 00:18:29.837 } 00:18:29.837 ]' 00:18:29.837 10:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:29.837 10:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:29.837 10:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:30.098 10:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:30.098 10:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:30.098 10:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.098 10:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.098 10:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.098 10:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:YmEyNjgwMzZjZTg5ODcwZDBhNWZjOGNlOGY2MGVmZWTj5lnq: --dhchap-ctrl-secret DHHC-1:02:MTZlNjk0MTA2NGJjOWRiOGJmZGM2ODk0YWU1OTE4MjM2ODJjMWViZmQyZTRiYmZmr9e5tQ==: 00:18:31.041 10:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.041 10:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:31.041 10:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:31.041 10:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.041 10:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:31.041 10:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.041 10:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:31.041 10:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:31.041 10:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:18:31.041 10:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.041 10:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:31.041 10:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:31.041 10:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:31.041 10:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.041 10:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.041 10:43:55 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:18:31.041 10:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.041 10:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:31.041 10:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.041 10:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.302 00:18:31.302 10:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:31.302 10:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:31.302 10:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.612 10:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.612 10:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.613 10:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:31.613 10:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.613 10:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:31.613 10:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:31.613 { 00:18:31.613 "cntlid": 85, 00:18:31.613 "qid": 0, 00:18:31.613 "state": "enabled", 00:18:31.613 "listen_address": { 00:18:31.613 "trtype": "TCP", 00:18:31.613 "adrfam": "IPv4", 00:18:31.613 "traddr": "10.0.0.2", 00:18:31.613 "trsvcid": "4420" 00:18:31.613 }, 00:18:31.613 "peer_address": { 00:18:31.613 "trtype": "TCP", 00:18:31.613 "adrfam": "IPv4", 00:18:31.613 "traddr": "10.0.0.1", 00:18:31.613 "trsvcid": "56710" 00:18:31.613 }, 00:18:31.613 "auth": { 00:18:31.613 "state": "completed", 00:18:31.613 "digest": "sha384", 00:18:31.613 "dhgroup": "ffdhe6144" 00:18:31.613 } 00:18:31.613 } 00:18:31.613 ]' 00:18:31.613 10:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:31.613 10:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:31.613 10:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:31.613 10:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:31.613 10:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:31.613 10:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.901 10:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.901 10:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.901 10:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 
-- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZTIwMmY5MjNjMjUwOWRkZTE5ODU1MzliMTVjYTg2MjhiNTRmNzBiNDQ2NDdiYjZjqlA85g==: --dhchap-ctrl-secret DHHC-1:01:OTM3NTdlMGZlMzIzZDM5YzJiYTBmMjY5YjNiZTUxNWbBSYh7: 00:18:32.474 10:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.474 10:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:32.474 10:43:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.474 10:43:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.474 10:43:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.474 10:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:32.474 10:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:32.474 10:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:32.735 10:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:18:32.735 10:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:32.735 10:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:32.735 10:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:32.735 10:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:32.735 10:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.735 10:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:32.735 10:43:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.735 10:43:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.735 10:43:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.735 10:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:32.735 10:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:32.996 00:18:32.996 10:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:32.996 10:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:32.996 10:43:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.257 10:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.257 10:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.257 10:43:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:33.257 10:43:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.257 10:43:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:33.257 10:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.257 { 00:18:33.257 "cntlid": 87, 00:18:33.257 "qid": 0, 00:18:33.257 "state": "enabled", 00:18:33.257 "listen_address": { 00:18:33.257 "trtype": "TCP", 00:18:33.257 "adrfam": "IPv4", 00:18:33.257 "traddr": "10.0.0.2", 00:18:33.257 "trsvcid": "4420" 00:18:33.257 }, 00:18:33.257 "peer_address": { 00:18:33.257 "trtype": "TCP", 00:18:33.257 "adrfam": "IPv4", 00:18:33.257 "traddr": "10.0.0.1", 00:18:33.257 "trsvcid": "56726" 00:18:33.257 }, 00:18:33.257 "auth": { 00:18:33.257 "state": "completed", 00:18:33.257 "digest": "sha384", 00:18:33.257 "dhgroup": "ffdhe6144" 00:18:33.257 } 00:18:33.257 } 00:18:33.257 ]' 00:18:33.257 10:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:33.257 10:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:33.257 10:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:33.257 10:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:33.257 10:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:33.518 10:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.518 10:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.518 10:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.518 10:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:OTc2ODk3OTc5NzBkZmE4MmY0OGY2MGFmNjY4ZDJiOWFjOGRiNDY1YzdmYmNhNjYxN2Y4OGE4OWM1ODhiMWU3ZReWLI4=: 00:18:34.459 10:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.459 10:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:34.459 10:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:34.459 10:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.459 10:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:34.459 10:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:34.459 10:43:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:34.459 10:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:34.459 10:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:34.459 10:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:18:34.459 10:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:34.459 10:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:34.459 10:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:34.459 10:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:34.459 10:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.459 10:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.459 10:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:34.459 10:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.459 10:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:34.459 10:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.459 10:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.029 00:18:35.029 10:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:35.030 10:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:35.030 10:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.291 10:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.291 10:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.291 10:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:35.291 10:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.291 10:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:35.291 10:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:35.291 { 00:18:35.291 "cntlid": 89, 00:18:35.291 "qid": 0, 00:18:35.291 "state": "enabled", 00:18:35.291 "listen_address": { 00:18:35.291 "trtype": "TCP", 00:18:35.291 "adrfam": "IPv4", 00:18:35.291 "traddr": "10.0.0.2", 00:18:35.291 
"trsvcid": "4420" 00:18:35.291 }, 00:18:35.291 "peer_address": { 00:18:35.291 "trtype": "TCP", 00:18:35.291 "adrfam": "IPv4", 00:18:35.291 "traddr": "10.0.0.1", 00:18:35.291 "trsvcid": "56754" 00:18:35.291 }, 00:18:35.291 "auth": { 00:18:35.291 "state": "completed", 00:18:35.291 "digest": "sha384", 00:18:35.291 "dhgroup": "ffdhe8192" 00:18:35.291 } 00:18:35.291 } 00:18:35.291 ]' 00:18:35.291 10:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:35.291 10:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:35.291 10:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:35.291 10:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:35.291 10:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:35.291 10:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.291 10:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.291 10:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.551 10:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:YTJlMmQ1Zjg4ODcyNDRmNzQwNzNlNTg0YzJhMWExYWM0MGRlYzk3YmIwNjVkNzJhYnyJog==: --dhchap-ctrl-secret DHHC-1:03:ZWQwZTkzYzU5YzFhMWE5ZjQwMTA5Yjg5NDA1NzI2NzU2MWY3MjA3NDQ2NmU2YTIyMDMzZTdhYjcxMTAxMWJlMjniCls=: 00:18:36.122 10:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.122 10:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:36.122 10:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:36.122 10:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.122 10:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:36.122 10:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:36.122 10:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:36.122 10:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:36.383 10:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:18:36.383 10:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:36.383 10:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:36.383 10:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:36.383 10:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:36.383 10:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.383 10:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.383 10:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:36.383 10:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.383 10:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:36.383 10:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.383 10:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.956 00:18:36.956 10:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.956 10:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.956 10:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.956 10:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.956 10:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.956 10:44:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:36.956 10:44:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.218 10:44:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:37.218 10:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.218 { 00:18:37.218 "cntlid": 91, 00:18:37.218 "qid": 0, 00:18:37.218 "state": "enabled", 00:18:37.218 "listen_address": { 00:18:37.218 "trtype": "TCP", 00:18:37.218 "adrfam": "IPv4", 00:18:37.218 "traddr": "10.0.0.2", 00:18:37.218 "trsvcid": "4420" 00:18:37.218 }, 00:18:37.218 "peer_address": { 00:18:37.218 "trtype": "TCP", 00:18:37.218 "adrfam": "IPv4", 00:18:37.218 "traddr": "10.0.0.1", 00:18:37.218 "trsvcid": "51184" 00:18:37.218 }, 00:18:37.218 "auth": { 00:18:37.218 "state": "completed", 00:18:37.218 "digest": "sha384", 00:18:37.218 "dhgroup": "ffdhe8192" 00:18:37.218 } 00:18:37.218 } 00:18:37.218 ]' 00:18:37.218 10:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.218 10:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:37.218 10:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:37.218 10:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:37.218 10:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:37.218 10:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.219 10:44:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.219 10:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.479 10:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:YmEyNjgwMzZjZTg5ODcwZDBhNWZjOGNlOGY2MGVmZWTj5lnq: --dhchap-ctrl-secret DHHC-1:02:MTZlNjk0MTA2NGJjOWRiOGJmZGM2ODk0YWU1OTE4MjM2ODJjMWViZmQyZTRiYmZmr9e5tQ==: 00:18:38.058 10:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.058 10:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:38.058 10:44:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:38.058 10:44:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.058 10:44:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:38.058 10:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:38.058 10:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:38.058 10:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:38.319 10:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:18:38.319 10:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:38.319 10:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:38.319 10:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:38.319 10:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:38.319 10:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.319 10:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.319 10:44:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:38.319 10:44:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.319 10:44:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:38.319 10:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.319 10:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.891 00:18:38.891 10:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:38.891 10:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:38.891 10:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.891 10:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.891 10:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.891 10:44:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:38.891 10:44:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.891 10:44:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:38.891 10:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:38.891 { 00:18:38.891 "cntlid": 93, 00:18:38.891 "qid": 0, 00:18:38.891 "state": "enabled", 00:18:38.891 "listen_address": { 00:18:38.891 "trtype": "TCP", 00:18:38.891 "adrfam": "IPv4", 00:18:38.891 "traddr": "10.0.0.2", 00:18:38.891 "trsvcid": "4420" 00:18:38.891 }, 00:18:38.891 "peer_address": { 00:18:38.891 "trtype": "TCP", 00:18:38.891 "adrfam": "IPv4", 00:18:38.891 "traddr": "10.0.0.1", 00:18:38.891 "trsvcid": "51208" 00:18:38.891 }, 00:18:38.891 "auth": { 00:18:38.891 "state": "completed", 00:18:38.891 "digest": "sha384", 00:18:38.891 "dhgroup": "ffdhe8192" 00:18:38.891 } 00:18:38.891 } 00:18:38.891 ]' 00:18:38.891 10:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:38.891 10:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:38.891 10:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:39.152 10:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:39.152 10:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:39.152 10:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.152 10:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.152 10:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.152 10:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZTIwMmY5MjNjMjUwOWRkZTE5ODU1MzliMTVjYTg2MjhiNTRmNzBiNDQ2NDdiYjZjqlA85g==: --dhchap-ctrl-secret DHHC-1:01:OTM3NTdlMGZlMzIzZDM5YzJiYTBmMjY5YjNiZTUxNWbBSYh7: 00:18:40.096 10:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.096 10:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:40.096 10:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:40.096 10:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.096 10:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:40.096 10:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.096 10:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:40.096 10:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:40.096 10:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:18:40.096 10:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.096 10:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:40.096 10:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:40.096 10:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:40.096 10:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.096 10:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:40.096 10:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:40.096 10:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.096 10:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:40.096 10:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:40.096 10:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:40.672 00:18:40.672 10:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:40.672 10:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:40.672 10:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.672 10:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.672 10:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.672 10:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:40.672 10:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.936 10:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:40.936 10:44:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:40.936 { 00:18:40.936 "cntlid": 95, 00:18:40.936 "qid": 0, 00:18:40.936 "state": "enabled", 00:18:40.936 "listen_address": { 00:18:40.936 "trtype": "TCP", 00:18:40.936 "adrfam": "IPv4", 00:18:40.936 "traddr": "10.0.0.2", 00:18:40.936 "trsvcid": "4420" 00:18:40.936 }, 00:18:40.936 "peer_address": { 00:18:40.936 "trtype": "TCP", 00:18:40.936 "adrfam": "IPv4", 00:18:40.936 "traddr": "10.0.0.1", 00:18:40.936 "trsvcid": "51240" 00:18:40.936 }, 00:18:40.936 "auth": { 00:18:40.936 "state": "completed", 00:18:40.936 "digest": "sha384", 00:18:40.936 "dhgroup": "ffdhe8192" 00:18:40.936 } 00:18:40.936 } 00:18:40.936 ]' 00:18:40.936 10:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:40.936 10:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:40.936 10:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:40.936 10:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:40.936 10:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:40.936 10:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.936 10:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.936 10:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.197 10:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:OTc2ODk3OTc5NzBkZmE4MmY0OGY2MGFmNjY4ZDJiOWFjOGRiNDY1YzdmYmNhNjYxN2Y4OGE4OWM1ODhiMWU3ZReWLI4=: 00:18:41.767 10:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.767 10:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:41.767 10:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:41.767 10:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.767 10:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:41.767 10:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:41.767 10:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:41.767 10:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:41.767 10:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:41.767 10:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:42.028 10:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:18:42.028 10:44:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.028 10:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:42.028 10:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:42.028 10:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:42.028 10:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.028 10:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.029 10:44:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:42.029 10:44:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.029 10:44:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:42.029 10:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.029 10:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.290 00:18:42.290 10:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.290 10:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.290 10:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.290 10:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.290 10:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.290 10:44:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:42.290 10:44:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.290 10:44:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:42.290 10:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.290 { 00:18:42.290 "cntlid": 97, 00:18:42.290 "qid": 0, 00:18:42.290 "state": "enabled", 00:18:42.290 "listen_address": { 00:18:42.290 "trtype": "TCP", 00:18:42.290 "adrfam": "IPv4", 00:18:42.290 "traddr": "10.0.0.2", 00:18:42.290 "trsvcid": "4420" 00:18:42.290 }, 00:18:42.290 "peer_address": { 00:18:42.290 "trtype": "TCP", 00:18:42.290 "adrfam": "IPv4", 00:18:42.290 "traddr": "10.0.0.1", 00:18:42.290 "trsvcid": "51268" 00:18:42.290 }, 00:18:42.290 "auth": { 00:18:42.290 "state": "completed", 00:18:42.290 "digest": "sha512", 00:18:42.290 "dhgroup": "null" 00:18:42.290 } 00:18:42.290 } 00:18:42.290 ]' 00:18:42.290 10:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.551 10:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:42.551 10:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:18:42.551 10:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:42.551 10:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.551 10:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.551 10:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.551 10:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.812 10:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:YTJlMmQ1Zjg4ODcyNDRmNzQwNzNlNTg0YzJhMWExYWM0MGRlYzk3YmIwNjVkNzJhYnyJog==: --dhchap-ctrl-secret DHHC-1:03:ZWQwZTkzYzU5YzFhMWE5ZjQwMTA5Yjg5NDA1NzI2NzU2MWY3MjA3NDQ2NmU2YTIyMDMzZTdhYjcxMTAxMWJlMjniCls=: 00:18:43.384 10:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.384 10:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:43.384 10:44:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:43.384 10:44:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.384 10:44:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:43.384 10:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:43.384 10:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:43.384 10:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:43.646 10:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:18:43.646 10:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:43.646 10:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:43.646 10:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:43.646 10:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:43.646 10:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.646 10:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.646 10:44:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:43.646 10:44:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.646 10:44:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:43.646 10:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.646 10:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.906 00:18:43.906 10:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.906 10:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.906 10:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.906 10:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.906 10:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.906 10:44:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:43.906 10:44:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.906 10:44:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:43.906 10:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:43.906 { 00:18:43.906 "cntlid": 99, 00:18:43.906 "qid": 0, 00:18:43.906 "state": "enabled", 00:18:43.906 "listen_address": { 00:18:43.906 "trtype": "TCP", 00:18:43.906 "adrfam": "IPv4", 00:18:43.906 "traddr": "10.0.0.2", 00:18:43.906 "trsvcid": "4420" 00:18:43.906 }, 00:18:43.906 "peer_address": { 00:18:43.906 "trtype": "TCP", 00:18:43.906 "adrfam": "IPv4", 00:18:43.906 "traddr": "10.0.0.1", 00:18:43.906 "trsvcid": "51288" 00:18:43.906 }, 00:18:43.906 "auth": { 00:18:43.906 "state": "completed", 00:18:43.906 "digest": "sha512", 00:18:43.906 "dhgroup": "null" 00:18:43.906 } 00:18:43.906 } 00:18:43.906 ]' 00:18:43.906 10:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.906 10:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:43.906 10:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:44.166 10:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:44.166 10:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:44.166 10:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.166 10:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.166 10:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.166 10:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:YmEyNjgwMzZjZTg5ODcwZDBhNWZjOGNlOGY2MGVmZWTj5lnq: --dhchap-ctrl-secret DHHC-1:02:MTZlNjk0MTA2NGJjOWRiOGJmZGM2ODk0YWU1OTE4MjM2ODJjMWViZmQyZTRiYmZmr9e5tQ==: 
00:18:45.108 10:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.108 10:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:45.108 10:44:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:45.108 10:44:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.108 10:44:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:45.108 10:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:45.108 10:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:45.108 10:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:45.108 10:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:18:45.108 10:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.108 10:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:45.108 10:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:45.108 10:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:45.108 10:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.108 10:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.108 10:44:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:45.108 10:44:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.108 10:44:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:45.108 10:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.108 10:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.369 00:18:45.369 10:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.369 10:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.369 10:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.630 10:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.630 10:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 
-- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.630 10:44:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:45.630 10:44:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.630 10:44:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:45.630 10:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.630 { 00:18:45.630 "cntlid": 101, 00:18:45.630 "qid": 0, 00:18:45.630 "state": "enabled", 00:18:45.631 "listen_address": { 00:18:45.631 "trtype": "TCP", 00:18:45.631 "adrfam": "IPv4", 00:18:45.631 "traddr": "10.0.0.2", 00:18:45.631 "trsvcid": "4420" 00:18:45.631 }, 00:18:45.631 "peer_address": { 00:18:45.631 "trtype": "TCP", 00:18:45.631 "adrfam": "IPv4", 00:18:45.631 "traddr": "10.0.0.1", 00:18:45.631 "trsvcid": "51320" 00:18:45.631 }, 00:18:45.631 "auth": { 00:18:45.631 "state": "completed", 00:18:45.631 "digest": "sha512", 00:18:45.631 "dhgroup": "null" 00:18:45.631 } 00:18:45.631 } 00:18:45.631 ]' 00:18:45.631 10:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.631 10:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:45.631 10:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.631 10:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:45.631 10:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.631 10:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.631 10:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.631 10:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.891 10:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZTIwMmY5MjNjMjUwOWRkZTE5ODU1MzliMTVjYTg2MjhiNTRmNzBiNDQ2NDdiYjZjqlA85g==: --dhchap-ctrl-secret DHHC-1:01:OTM3NTdlMGZlMzIzZDM5YzJiYTBmMjY5YjNiZTUxNWbBSYh7: 00:18:46.460 10:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.721 10:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:46.721 10:44:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:46.721 10:44:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.721 10:44:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:46.721 10:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.721 10:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:46.721 10:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:18:46.721 10:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:18:46.721 10:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.721 10:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:46.721 10:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:46.721 10:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:46.721 10:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.721 10:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:46.721 10:44:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:46.721 10:44:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.721 10:44:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:46.721 10:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:46.721 10:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:46.981 00:18:46.981 10:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:46.981 10:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:46.981 10:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.241 10:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.241 10:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.241 10:44:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:47.241 10:44:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.241 10:44:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:47.241 10:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.241 { 00:18:47.241 "cntlid": 103, 00:18:47.241 "qid": 0, 00:18:47.241 "state": "enabled", 00:18:47.241 "listen_address": { 00:18:47.241 "trtype": "TCP", 00:18:47.241 "adrfam": "IPv4", 00:18:47.241 "traddr": "10.0.0.2", 00:18:47.241 "trsvcid": "4420" 00:18:47.241 }, 00:18:47.241 "peer_address": { 00:18:47.241 "trtype": "TCP", 00:18:47.241 "adrfam": "IPv4", 00:18:47.241 "traddr": "10.0.0.1", 00:18:47.241 "trsvcid": "45342" 00:18:47.241 }, 00:18:47.241 "auth": { 00:18:47.241 "state": "completed", 00:18:47.241 "digest": "sha512", 00:18:47.241 "dhgroup": "null" 00:18:47.241 } 00:18:47.241 } 00:18:47.241 ]' 00:18:47.241 10:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.241 10:44:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:47.241 10:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.241 10:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:47.241 10:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.241 10:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.241 10:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.241 10:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.501 10:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:OTc2ODk3OTc5NzBkZmE4MmY0OGY2MGFmNjY4ZDJiOWFjOGRiNDY1YzdmYmNhNjYxN2Y4OGE4OWM1ODhiMWU3ZReWLI4=: 00:18:48.071 10:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.071 10:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:48.071 10:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:48.071 10:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.071 10:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:48.071 10:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:48.071 10:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.071 10:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:48.071 10:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:48.331 10:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:18:48.331 10:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.331 10:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:48.331 10:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:48.331 10:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:48.331 10:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.331 10:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.331 10:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:48.331 10:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.331 10:44:12 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:48.331 10:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.331 10:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.592 00:18:48.592 10:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:48.592 10:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:48.592 10:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.853 10:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.853 10:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.853 10:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:48.853 10:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.853 10:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:48.853 10:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:48.853 { 00:18:48.853 "cntlid": 105, 00:18:48.853 "qid": 0, 00:18:48.853 "state": "enabled", 00:18:48.853 "listen_address": { 00:18:48.853 "trtype": "TCP", 00:18:48.853 "adrfam": "IPv4", 00:18:48.853 "traddr": "10.0.0.2", 00:18:48.853 "trsvcid": "4420" 00:18:48.853 }, 00:18:48.853 "peer_address": { 00:18:48.853 "trtype": "TCP", 00:18:48.853 "adrfam": "IPv4", 00:18:48.853 "traddr": "10.0.0.1", 00:18:48.853 "trsvcid": "45376" 00:18:48.853 }, 00:18:48.853 "auth": { 00:18:48.853 "state": "completed", 00:18:48.853 "digest": "sha512", 00:18:48.853 "dhgroup": "ffdhe2048" 00:18:48.853 } 00:18:48.853 } 00:18:48.853 ]' 00:18:48.853 10:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:48.853 10:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:48.853 10:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:48.853 10:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:48.853 10:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:48.853 10:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.853 10:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.853 10:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.114 10:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:YTJlMmQ1Zjg4ODcyNDRmNzQwNzNlNTg0YzJhMWExYWM0MGRlYzk3YmIwNjVkNzJhYnyJog==: --dhchap-ctrl-secret DHHC-1:03:ZWQwZTkzYzU5YzFhMWE5ZjQwMTA5Yjg5NDA1NzI2NzU2MWY3MjA3NDQ2NmU2YTIyMDMzZTdhYjcxMTAxMWJlMjniCls=: 00:18:49.683 10:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.683 10:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:49.684 10:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:49.684 10:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.684 10:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:49.684 10:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:49.684 10:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:49.684 10:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:49.944 10:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:18:49.944 10:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:49.944 10:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:49.944 10:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:49.944 10:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:49.944 10:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.944 10:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.944 10:44:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:49.944 10:44:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.944 10:44:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:49.944 10:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.944 10:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.204 00:18:50.204 10:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:50.204 10:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:50.204 10:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.204 10:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.204 10:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.204 10:44:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:50.204 10:44:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.465 10:44:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:50.465 10:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:50.465 { 00:18:50.465 "cntlid": 107, 00:18:50.465 "qid": 0, 00:18:50.465 "state": "enabled", 00:18:50.465 "listen_address": { 00:18:50.465 "trtype": "TCP", 00:18:50.465 "adrfam": "IPv4", 00:18:50.465 "traddr": "10.0.0.2", 00:18:50.465 "trsvcid": "4420" 00:18:50.465 }, 00:18:50.465 "peer_address": { 00:18:50.465 "trtype": "TCP", 00:18:50.465 "adrfam": "IPv4", 00:18:50.465 "traddr": "10.0.0.1", 00:18:50.465 "trsvcid": "45400" 00:18:50.465 }, 00:18:50.465 "auth": { 00:18:50.465 "state": "completed", 00:18:50.465 "digest": "sha512", 00:18:50.465 "dhgroup": "ffdhe2048" 00:18:50.465 } 00:18:50.465 } 00:18:50.465 ]' 00:18:50.465 10:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:50.465 10:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:50.465 10:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:50.465 10:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:50.465 10:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:50.465 10:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.465 10:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.465 10:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.725 10:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:YmEyNjgwMzZjZTg5ODcwZDBhNWZjOGNlOGY2MGVmZWTj5lnq: --dhchap-ctrl-secret DHHC-1:02:MTZlNjk0MTA2NGJjOWRiOGJmZGM2ODk0YWU1OTE4MjM2ODJjMWViZmQyZTRiYmZmr9e5tQ==: 00:18:51.295 10:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.295 10:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:51.295 10:44:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:51.295 10:44:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.295 10:44:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:51.295 10:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.295 10:44:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:51.296 10:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:51.556 10:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:18:51.556 10:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:51.556 10:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:51.556 10:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:51.556 10:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:51.556 10:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.556 10:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.556 10:44:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:51.556 10:44:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.556 10:44:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:51.556 10:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.556 10:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.816 00:18:51.816 10:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:51.816 10:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:51.816 10:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.816 10:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.816 10:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.816 10:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:51.816 10:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.816 10:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:51.816 10:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.816 { 00:18:51.816 "cntlid": 109, 00:18:51.816 "qid": 0, 00:18:51.816 "state": "enabled", 00:18:51.816 "listen_address": { 00:18:51.816 "trtype": "TCP", 00:18:51.816 "adrfam": "IPv4", 00:18:51.816 "traddr": "10.0.0.2", 00:18:51.816 "trsvcid": "4420" 00:18:51.816 }, 00:18:51.816 "peer_address": { 00:18:51.816 "trtype": "TCP", 00:18:51.816 
"adrfam": "IPv4", 00:18:51.816 "traddr": "10.0.0.1", 00:18:51.816 "trsvcid": "45416" 00:18:51.816 }, 00:18:51.816 "auth": { 00:18:51.816 "state": "completed", 00:18:51.816 "digest": "sha512", 00:18:51.816 "dhgroup": "ffdhe2048" 00:18:51.816 } 00:18:51.816 } 00:18:51.816 ]' 00:18:51.816 10:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.076 10:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:52.076 10:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.076 10:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:52.076 10:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.076 10:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.076 10:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.076 10:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.337 10:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZTIwMmY5MjNjMjUwOWRkZTE5ODU1MzliMTVjYTg2MjhiNTRmNzBiNDQ2NDdiYjZjqlA85g==: --dhchap-ctrl-secret DHHC-1:01:OTM3NTdlMGZlMzIzZDM5YzJiYTBmMjY5YjNiZTUxNWbBSYh7: 00:18:52.909 10:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.909 10:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:52.909 10:44:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:52.909 10:44:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.909 10:44:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:52.909 10:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:52.909 10:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:52.909 10:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:53.169 10:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:18:53.169 10:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:53.169 10:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:53.169 10:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:53.169 10:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:53.169 10:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.169 10:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:53.169 10:44:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:53.169 10:44:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.169 10:44:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:53.169 10:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:53.169 10:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:53.430 00:18:53.430 10:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.430 10:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:53.430 10:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.430 10:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.430 10:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.430 10:44:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:53.430 10:44:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.430 10:44:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:53.430 10:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:53.430 { 00:18:53.430 "cntlid": 111, 00:18:53.430 "qid": 0, 00:18:53.430 "state": "enabled", 00:18:53.430 "listen_address": { 00:18:53.430 "trtype": "TCP", 00:18:53.430 "adrfam": "IPv4", 00:18:53.430 "traddr": "10.0.0.2", 00:18:53.430 "trsvcid": "4420" 00:18:53.430 }, 00:18:53.430 "peer_address": { 00:18:53.430 "trtype": "TCP", 00:18:53.430 "adrfam": "IPv4", 00:18:53.430 "traddr": "10.0.0.1", 00:18:53.430 "trsvcid": "45426" 00:18:53.430 }, 00:18:53.430 "auth": { 00:18:53.430 "state": "completed", 00:18:53.430 "digest": "sha512", 00:18:53.430 "dhgroup": "ffdhe2048" 00:18:53.430 } 00:18:53.430 } 00:18:53.430 ]' 00:18:53.430 10:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:53.690 10:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:53.690 10:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:53.690 10:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:53.690 10:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:53.690 10:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.690 10:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.690 10:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.950 10:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:OTc2ODk3OTc5NzBkZmE4MmY0OGY2MGFmNjY4ZDJiOWFjOGRiNDY1YzdmYmNhNjYxN2Y4OGE4OWM1ODhiMWU3ZReWLI4=: 00:18:54.522 10:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.522 10:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:54.522 10:44:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:54.522 10:44:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.522 10:44:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:54.522 10:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:54.522 10:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:54.522 10:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:54.522 10:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:54.782 10:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:18:54.782 10:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:54.782 10:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:54.782 10:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:54.782 10:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:54.782 10:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.782 10:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.782 10:44:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:54.782 10:44:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.782 10:44:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:54.782 10:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.782 10:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
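(Annotation: the trace repeats the same connect_authenticate cycle once per digest/dhgroup/key combination. The plain-shell sketch below condenses a single iteration using only the rpc.py calls and nvme-cli commands visible in this log; the variable names, the placeholder DHHC secrets, and the assumption that the target-side rpc_cmd wrapper talks to the default SPDK socket are illustrative, not the actual target/auth.sh source.)

# Illustrative recap of one connect_authenticate iteration (sha512 / ffdhe2048 / key0); paths and NQNs as logged.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
digest=sha512 dhgroup=ffdhe2048 key=key0

# 1. Restrict the host-side bdev_nvme module to the digest/dhgroup under test.
$RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 2. Allow the host NQN on the target subsystem with the DH-CHAP key (plus the controller key when one is defined; key3 has none in this run).
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key "$key" --dhchap-ctrlr-key "c$key"

# 3. Attach a controller over TCP; DH-HMAC-CHAP runs during this connect.
$RPC -s $HOST_SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key "$key" --dhchap-ctrlr-key "c$key"

# 4. Verify the negotiated parameters on the active qpair.
$RPC -s $HOST_SOCK bdev_nvme_get_controllers | jq -r '.[].name'        # expect nvme0
$RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.digest'    # expect sha512
$RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.dhgroup'   # expect ffdhe2048
$RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'     # expect completed

# 5. Tear down, then exercise the kernel initiator with the generated DHHC-1 secrets (shown in full in the log above).
$RPC -s $HOST_SOCK bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 \
    --dhchap-secret "DHHC-1:00:<key0 secret>" --dhchap-ctrl-secret "DHHC-1:03:<ckey0 secret>"
nvme disconnect -n "$SUBNQN"
$RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"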
00:18:54.782 00:18:55.043 10:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.043 10:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.043 10:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.043 10:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.043 10:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.043 10:44:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:55.043 10:44:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.043 10:44:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:55.043 10:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.043 { 00:18:55.043 "cntlid": 113, 00:18:55.043 "qid": 0, 00:18:55.043 "state": "enabled", 00:18:55.043 "listen_address": { 00:18:55.043 "trtype": "TCP", 00:18:55.043 "adrfam": "IPv4", 00:18:55.043 "traddr": "10.0.0.2", 00:18:55.043 "trsvcid": "4420" 00:18:55.043 }, 00:18:55.043 "peer_address": { 00:18:55.043 "trtype": "TCP", 00:18:55.043 "adrfam": "IPv4", 00:18:55.043 "traddr": "10.0.0.1", 00:18:55.043 "trsvcid": "45462" 00:18:55.043 }, 00:18:55.043 "auth": { 00:18:55.043 "state": "completed", 00:18:55.043 "digest": "sha512", 00:18:55.043 "dhgroup": "ffdhe3072" 00:18:55.043 } 00:18:55.043 } 00:18:55.043 ]' 00:18:55.043 10:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.043 10:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:55.043 10:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:55.303 10:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:55.303 10:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.303 10:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.303 10:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.303 10:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.303 10:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:YTJlMmQ1Zjg4ODcyNDRmNzQwNzNlNTg0YzJhMWExYWM0MGRlYzk3YmIwNjVkNzJhYnyJog==: --dhchap-ctrl-secret DHHC-1:03:ZWQwZTkzYzU5YzFhMWE5ZjQwMTA5Yjg5NDA1NzI2NzU2MWY3MjA3NDQ2NmU2YTIyMDMzZTdhYjcxMTAxMWJlMjniCls=: 00:18:56.245 10:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.245 10:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:56.245 10:44:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 
00:18:56.245 10:44:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.245 10:44:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.245 10:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.245 10:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:56.245 10:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:56.245 10:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:18:56.245 10:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.245 10:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:56.245 10:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:56.245 10:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:56.245 10:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.245 10:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.245 10:44:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.245 10:44:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.245 10:44:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.245 10:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.245 10:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.505 00:18:56.505 10:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.505 10:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.505 10:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.766 10:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.766 10:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.766 10:44:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.766 10:44:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.766 10:44:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.766 10:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.766 { 00:18:56.766 
"cntlid": 115, 00:18:56.766 "qid": 0, 00:18:56.766 "state": "enabled", 00:18:56.766 "listen_address": { 00:18:56.766 "trtype": "TCP", 00:18:56.766 "adrfam": "IPv4", 00:18:56.766 "traddr": "10.0.0.2", 00:18:56.766 "trsvcid": "4420" 00:18:56.766 }, 00:18:56.766 "peer_address": { 00:18:56.766 "trtype": "TCP", 00:18:56.766 "adrfam": "IPv4", 00:18:56.766 "traddr": "10.0.0.1", 00:18:56.766 "trsvcid": "35218" 00:18:56.766 }, 00:18:56.766 "auth": { 00:18:56.766 "state": "completed", 00:18:56.766 "digest": "sha512", 00:18:56.766 "dhgroup": "ffdhe3072" 00:18:56.766 } 00:18:56.766 } 00:18:56.766 ]' 00:18:56.766 10:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:56.766 10:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:56.766 10:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:56.766 10:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:56.766 10:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:56.766 10:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.766 10:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.766 10:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.027 10:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:YmEyNjgwMzZjZTg5ODcwZDBhNWZjOGNlOGY2MGVmZWTj5lnq: --dhchap-ctrl-secret DHHC-1:02:MTZlNjk0MTA2NGJjOWRiOGJmZGM2ODk0YWU1OTE4MjM2ODJjMWViZmQyZTRiYmZmr9e5tQ==: 00:18:57.597 10:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.597 10:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:57.597 10:44:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:57.597 10:44:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.597 10:44:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:57.597 10:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:57.597 10:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:57.597 10:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:57.858 10:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:18:57.858 10:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:57.858 10:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:57.858 10:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:18:57.858 10:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:57.858 10:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.858 10:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.858 10:44:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:57.858 10:44:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.858 10:44:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:57.858 10:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.859 10:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.119 00:18:58.119 10:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.119 10:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.119 10:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.382 10:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.382 10:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.382 10:44:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:58.382 10:44:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.382 10:44:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:58.382 10:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.382 { 00:18:58.382 "cntlid": 117, 00:18:58.382 "qid": 0, 00:18:58.382 "state": "enabled", 00:18:58.382 "listen_address": { 00:18:58.382 "trtype": "TCP", 00:18:58.382 "adrfam": "IPv4", 00:18:58.382 "traddr": "10.0.0.2", 00:18:58.382 "trsvcid": "4420" 00:18:58.382 }, 00:18:58.382 "peer_address": { 00:18:58.382 "trtype": "TCP", 00:18:58.382 "adrfam": "IPv4", 00:18:58.382 "traddr": "10.0.0.1", 00:18:58.382 "trsvcid": "35244" 00:18:58.382 }, 00:18:58.382 "auth": { 00:18:58.382 "state": "completed", 00:18:58.382 "digest": "sha512", 00:18:58.382 "dhgroup": "ffdhe3072" 00:18:58.382 } 00:18:58.382 } 00:18:58.382 ]' 00:18:58.383 10:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.383 10:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:58.383 10:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:58.383 10:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:58.383 10:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:18:58.383 10:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.383 10:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.383 10:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.674 10:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZTIwMmY5MjNjMjUwOWRkZTE5ODU1MzliMTVjYTg2MjhiNTRmNzBiNDQ2NDdiYjZjqlA85g==: --dhchap-ctrl-secret DHHC-1:01:OTM3NTdlMGZlMzIzZDM5YzJiYTBmMjY5YjNiZTUxNWbBSYh7: 00:18:59.272 10:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.272 10:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:59.272 10:44:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:59.272 10:44:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.272 10:44:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:59.272 10:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.272 10:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:59.272 10:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:59.533 10:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:18:59.533 10:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:59.533 10:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:59.533 10:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:59.533 10:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:59.533 10:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.533 10:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:59.533 10:44:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:59.533 10:44:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.533 10:44:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:59.533 10:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:59.533 10:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:59.794 00:18:59.794 10:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.794 10:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.794 10:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.794 10:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.794 10:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.794 10:44:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:59.794 10:44:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.794 10:44:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:59.794 10:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.794 { 00:18:59.794 "cntlid": 119, 00:18:59.794 "qid": 0, 00:18:59.794 "state": "enabled", 00:18:59.794 "listen_address": { 00:18:59.794 "trtype": "TCP", 00:18:59.794 "adrfam": "IPv4", 00:18:59.794 "traddr": "10.0.0.2", 00:18:59.794 "trsvcid": "4420" 00:18:59.794 }, 00:18:59.794 "peer_address": { 00:18:59.794 "trtype": "TCP", 00:18:59.794 "adrfam": "IPv4", 00:18:59.794 "traddr": "10.0.0.1", 00:18:59.794 "trsvcid": "35284" 00:18:59.794 }, 00:18:59.794 "auth": { 00:18:59.794 "state": "completed", 00:18:59.794 "digest": "sha512", 00:18:59.794 "dhgroup": "ffdhe3072" 00:18:59.794 } 00:18:59.794 } 00:18:59.794 ]' 00:18:59.794 10:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.794 10:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:59.794 10:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.054 10:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:00.054 10:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.054 10:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.054 10:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.054 10:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.054 10:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:OTc2ODk3OTc5NzBkZmE4MmY0OGY2MGFmNjY4ZDJiOWFjOGRiNDY1YzdmYmNhNjYxN2Y4OGE4OWM1ODhiMWU3ZReWLI4=: 00:19:00.996 10:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.996 10:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:00.996 10:44:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:00.996 10:44:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.996 10:44:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:00.996 10:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:00.996 10:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.996 10:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:00.996 10:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:00.996 10:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:19:00.996 10:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.996 10:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:00.996 10:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:00.996 10:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:00.996 10:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.996 10:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.997 10:44:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:00.997 10:44:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.997 10:44:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:00.997 10:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.997 10:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.257 00:19:01.257 10:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:01.257 10:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:01.257 10:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.518 10:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.518 10:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.518 10:44:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:01.518 10:44:25 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.518 10:44:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:01.518 10:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:01.518 { 00:19:01.518 "cntlid": 121, 00:19:01.518 "qid": 0, 00:19:01.518 "state": "enabled", 00:19:01.518 "listen_address": { 00:19:01.518 "trtype": "TCP", 00:19:01.518 "adrfam": "IPv4", 00:19:01.518 "traddr": "10.0.0.2", 00:19:01.518 "trsvcid": "4420" 00:19:01.518 }, 00:19:01.518 "peer_address": { 00:19:01.518 "trtype": "TCP", 00:19:01.518 "adrfam": "IPv4", 00:19:01.518 "traddr": "10.0.0.1", 00:19:01.518 "trsvcid": "35304" 00:19:01.518 }, 00:19:01.518 "auth": { 00:19:01.518 "state": "completed", 00:19:01.518 "digest": "sha512", 00:19:01.518 "dhgroup": "ffdhe4096" 00:19:01.518 } 00:19:01.518 } 00:19:01.518 ]' 00:19:01.518 10:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.518 10:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:01.518 10:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:01.518 10:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:01.518 10:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:01.518 10:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.518 10:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.518 10:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.778 10:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:YTJlMmQ1Zjg4ODcyNDRmNzQwNzNlNTg0YzJhMWExYWM0MGRlYzk3YmIwNjVkNzJhYnyJog==: --dhchap-ctrl-secret DHHC-1:03:ZWQwZTkzYzU5YzFhMWE5ZjQwMTA5Yjg5NDA1NzI2NzU2MWY3MjA3NDQ2NmU2YTIyMDMzZTdhYjcxMTAxMWJlMjniCls=: 00:19:02.349 10:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.349 10:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:02.349 10:44:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.349 10:44:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.349 10:44:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.349 10:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:02.349 10:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:02.349 10:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:02.610 10:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe4096 1 00:19:02.610 10:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:02.610 10:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:02.610 10:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:02.610 10:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:02.610 10:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.610 10:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.610 10:44:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.610 10:44:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.610 10:44:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.610 10:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.610 10:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.871 00:19:02.871 10:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.871 10:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.871 10:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.132 10:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.133 10:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.133 10:44:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:03.133 10:44:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.133 10:44:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:03.133 10:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.133 { 00:19:03.133 "cntlid": 123, 00:19:03.133 "qid": 0, 00:19:03.133 "state": "enabled", 00:19:03.133 "listen_address": { 00:19:03.133 "trtype": "TCP", 00:19:03.133 "adrfam": "IPv4", 00:19:03.133 "traddr": "10.0.0.2", 00:19:03.133 "trsvcid": "4420" 00:19:03.133 }, 00:19:03.133 "peer_address": { 00:19:03.133 "trtype": "TCP", 00:19:03.133 "adrfam": "IPv4", 00:19:03.133 "traddr": "10.0.0.1", 00:19:03.133 "trsvcid": "35338" 00:19:03.133 }, 00:19:03.133 "auth": { 00:19:03.133 "state": "completed", 00:19:03.133 "digest": "sha512", 00:19:03.133 "dhgroup": "ffdhe4096" 00:19:03.133 } 00:19:03.133 } 00:19:03.133 ]' 00:19:03.133 10:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.133 10:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- 
# [[ sha512 == \s\h\a\5\1\2 ]] 00:19:03.133 10:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.133 10:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:03.133 10:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.133 10:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.133 10:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.133 10:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.394 10:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:YmEyNjgwMzZjZTg5ODcwZDBhNWZjOGNlOGY2MGVmZWTj5lnq: --dhchap-ctrl-secret DHHC-1:02:MTZlNjk0MTA2NGJjOWRiOGJmZGM2ODk0YWU1OTE4MjM2ODJjMWViZmQyZTRiYmZmr9e5tQ==: 00:19:03.964 10:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.964 10:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:03.965 10:44:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:03.965 10:44:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.965 10:44:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:03.965 10:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.965 10:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:03.965 10:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:04.225 10:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:19:04.225 10:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.225 10:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:04.225 10:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:04.225 10:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:04.225 10:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.226 10:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.226 10:44:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:04.226 10:44:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.226 10:44:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:04.226 
10:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.226 10:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.487 00:19:04.487 10:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:04.487 10:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:04.487 10:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.487 10:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.487 10:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.487 10:44:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:04.487 10:44:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.487 10:44:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:04.487 10:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:04.487 { 00:19:04.487 "cntlid": 125, 00:19:04.487 "qid": 0, 00:19:04.487 "state": "enabled", 00:19:04.487 "listen_address": { 00:19:04.487 "trtype": "TCP", 00:19:04.487 "adrfam": "IPv4", 00:19:04.487 "traddr": "10.0.0.2", 00:19:04.487 "trsvcid": "4420" 00:19:04.487 }, 00:19:04.487 "peer_address": { 00:19:04.487 "trtype": "TCP", 00:19:04.487 "adrfam": "IPv4", 00:19:04.487 "traddr": "10.0.0.1", 00:19:04.487 "trsvcid": "35364" 00:19:04.487 }, 00:19:04.487 "auth": { 00:19:04.487 "state": "completed", 00:19:04.487 "digest": "sha512", 00:19:04.487 "dhgroup": "ffdhe4096" 00:19:04.487 } 00:19:04.487 } 00:19:04.487 ]' 00:19:04.487 10:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:04.748 10:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:04.748 10:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:04.748 10:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:04.748 10:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:04.748 10:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.748 10:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.748 10:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.009 10:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret 
DHHC-1:02:ZTIwMmY5MjNjMjUwOWRkZTE5ODU1MzliMTVjYTg2MjhiNTRmNzBiNDQ2NDdiYjZjqlA85g==: --dhchap-ctrl-secret DHHC-1:01:OTM3NTdlMGZlMzIzZDM5YzJiYTBmMjY5YjNiZTUxNWbBSYh7: 00:19:05.580 10:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.580 10:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:05.580 10:44:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:05.580 10:44:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.580 10:44:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:05.580 10:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:05.580 10:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:05.580 10:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:05.840 10:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:19:05.840 10:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:05.840 10:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:05.840 10:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:05.840 10:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:05.840 10:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.840 10:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:05.840 10:44:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:05.840 10:44:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.840 10:44:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:05.840 10:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:05.840 10:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:06.103 00:19:06.103 10:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:06.103 10:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:06.103 10:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.103 10:44:30 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.103 10:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.103 10:44:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:06.103 10:44:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.103 10:44:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:06.103 10:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:06.103 { 00:19:06.103 "cntlid": 127, 00:19:06.103 "qid": 0, 00:19:06.103 "state": "enabled", 00:19:06.103 "listen_address": { 00:19:06.103 "trtype": "TCP", 00:19:06.103 "adrfam": "IPv4", 00:19:06.103 "traddr": "10.0.0.2", 00:19:06.103 "trsvcid": "4420" 00:19:06.103 }, 00:19:06.103 "peer_address": { 00:19:06.103 "trtype": "TCP", 00:19:06.103 "adrfam": "IPv4", 00:19:06.103 "traddr": "10.0.0.1", 00:19:06.103 "trsvcid": "35406" 00:19:06.103 }, 00:19:06.103 "auth": { 00:19:06.103 "state": "completed", 00:19:06.103 "digest": "sha512", 00:19:06.103 "dhgroup": "ffdhe4096" 00:19:06.103 } 00:19:06.103 } 00:19:06.103 ]' 00:19:06.369 10:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:06.369 10:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:06.369 10:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:06.369 10:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:06.369 10:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:06.369 10:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.369 10:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.369 10:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.629 10:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:OTc2ODk3OTc5NzBkZmE4MmY0OGY2MGFmNjY4ZDJiOWFjOGRiNDY1YzdmYmNhNjYxN2Y4OGE4OWM1ODhiMWU3ZReWLI4=: 00:19:07.201 10:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.201 10:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:07.201 10:44:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:07.201 10:44:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.201 10:44:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:07.201 10:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:07.201 10:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:07.201 10:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
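[Editor's sketch] The trace above repeats one and the same RPC sequence for every digest/dhgroup/key cell of the test matrix (connect_authenticate <digest> <dhgroup> <keyid> in target/auth.sh). Below is a minimal sketch of a single pass, reconstructed only from commands that appear verbatim in this log; host_rpc/target_rpc, $targetsock, and the $host_secret/$ctrl_secret variables are illustrative stand-ins for the hostrpc/rpc_cmd helpers and for the DHHC-1 secrets loaded earlier in the run (rpc_cmd's socket is hidden by xtrace_disable here), while the paths, NQNs and the 10.0.0.2:4420 listener are the ones used above. For key3 the controller secret is omitted, matching the ${ckeys[$3]:+...} expansion in the script.

rpcdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts   # rpc.py path as used in this run
hostsock=/var/tmp/host.sock                                        # host-side bdev_nvme app socket (from the trace)
targetsock=/var/tmp/spdk.sock                                      # assumption: target app socket, not shown in this trace
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
hostid=00539ede-7deb-ec11-9bc7-a4bf01928396
digest=sha512; dhgroup=ffdhe6144; keyid=0                          # one cell of the matrix being iterated above

host_rpc()   { "$rpcdir/rpc.py" -s "$hostsock"   "$@"; }           # stand-in for the hostrpc helper
target_rpc() { "$rpcdir/rpc.py" -s "$targetsock" "$@"; }           # stand-in for rpc_cmd

# Restrict the host-side initiator to the digest/dhgroup under test.
host_rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Allow the host on the target subsystem with the DH-HMAC-CHAP key pair for this key id.
target_rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Attach a controller from the host side, authenticating in both directions.
host_rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Verify the controller exists and that the target reports the negotiated auth parameters.
[[ $(host_rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$(target_rpc nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Repeat the same authentication through the kernel initiator, then clean up for the next cell.
host_rpc bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret "$host_secret" --dhchap-ctrl-secret "$ctrl_secret"   # the DHHC-1:xx:... strings seen above
nvme disconnect -n "$subnqn"
target_rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The trace that follows is exactly this sequence for sha512/ffdhe6144 with keys 0 through 3, then again for ffdhe8192.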
00:19:07.201 10:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:07.461 10:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:19:07.461 10:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:07.461 10:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:07.461 10:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:07.461 10:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:07.461 10:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.461 10:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.461 10:44:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:07.461 10:44:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.461 10:44:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:07.461 10:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.461 10:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.722 00:19:07.722 10:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.722 10:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.722 10:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.983 10:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.983 10:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.983 10:44:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:07.983 10:44:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.983 10:44:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:07.983 10:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.983 { 00:19:07.983 "cntlid": 129, 00:19:07.983 "qid": 0, 00:19:07.983 "state": "enabled", 00:19:07.983 "listen_address": { 00:19:07.983 "trtype": "TCP", 00:19:07.983 "adrfam": "IPv4", 00:19:07.983 "traddr": "10.0.0.2", 00:19:07.983 "trsvcid": "4420" 00:19:07.983 }, 00:19:07.983 "peer_address": { 00:19:07.983 "trtype": "TCP", 00:19:07.983 "adrfam": "IPv4", 00:19:07.983 "traddr": "10.0.0.1", 00:19:07.983 "trsvcid": "40422" 00:19:07.983 }, 00:19:07.983 "auth": { 
00:19:07.983 "state": "completed", 00:19:07.983 "digest": "sha512", 00:19:07.983 "dhgroup": "ffdhe6144" 00:19:07.983 } 00:19:07.983 } 00:19:07.983 ]' 00:19:07.983 10:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.983 10:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:07.983 10:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.983 10:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:07.983 10:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.983 10:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.983 10:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.983 10:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.245 10:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:YTJlMmQ1Zjg4ODcyNDRmNzQwNzNlNTg0YzJhMWExYWM0MGRlYzk3YmIwNjVkNzJhYnyJog==: --dhchap-ctrl-secret DHHC-1:03:ZWQwZTkzYzU5YzFhMWE5ZjQwMTA5Yjg5NDA1NzI2NzU2MWY3MjA3NDQ2NmU2YTIyMDMzZTdhYjcxMTAxMWJlMjniCls=: 00:19:08.815 10:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.815 10:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:08.815 10:44:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:08.815 10:44:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.815 10:44:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:08.815 10:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.815 10:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:08.815 10:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:08.815 10:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:19:08.815 10:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.815 10:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:08.815 10:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:08.815 10:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:08.815 10:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.815 10:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.815 10:44:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:08.815 10:44:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.815 10:44:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:08.815 10:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.815 10:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.386 00:19:09.386 10:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.386 10:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.386 10:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.386 10:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.386 10:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.386 10:44:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:09.386 10:44:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.386 10:44:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:09.386 10:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:09.386 { 00:19:09.386 "cntlid": 131, 00:19:09.386 "qid": 0, 00:19:09.386 "state": "enabled", 00:19:09.386 "listen_address": { 00:19:09.386 "trtype": "TCP", 00:19:09.386 "adrfam": "IPv4", 00:19:09.386 "traddr": "10.0.0.2", 00:19:09.386 "trsvcid": "4420" 00:19:09.386 }, 00:19:09.386 "peer_address": { 00:19:09.386 "trtype": "TCP", 00:19:09.386 "adrfam": "IPv4", 00:19:09.386 "traddr": "10.0.0.1", 00:19:09.386 "trsvcid": "40448" 00:19:09.386 }, 00:19:09.386 "auth": { 00:19:09.386 "state": "completed", 00:19:09.386 "digest": "sha512", 00:19:09.386 "dhgroup": "ffdhe6144" 00:19:09.386 } 00:19:09.386 } 00:19:09.386 ]' 00:19:09.386 10:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.386 10:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:09.386 10:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.647 10:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:09.647 10:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:09.647 10:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.647 10:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.647 10:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.647 10:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:YmEyNjgwMzZjZTg5ODcwZDBhNWZjOGNlOGY2MGVmZWTj5lnq: --dhchap-ctrl-secret DHHC-1:02:MTZlNjk0MTA2NGJjOWRiOGJmZGM2ODk0YWU1OTE4MjM2ODJjMWViZmQyZTRiYmZmr9e5tQ==: 00:19:10.220 10:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.220 10:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:10.220 10:44:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:10.220 10:44:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.481 10:44:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:10.481 10:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.481 10:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:10.481 10:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:10.481 10:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:19:10.481 10:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.482 10:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:10.482 10:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:10.482 10:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:10.482 10:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.482 10:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.482 10:44:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:10.482 10:44:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.482 10:44:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:10.482 10:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.482 10:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:19:10.742 00:19:11.002 10:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.003 10:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.003 10:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.003 10:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.003 10:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.003 10:44:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:11.003 10:44:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.003 10:44:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:11.003 10:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.003 { 00:19:11.003 "cntlid": 133, 00:19:11.003 "qid": 0, 00:19:11.003 "state": "enabled", 00:19:11.003 "listen_address": { 00:19:11.003 "trtype": "TCP", 00:19:11.003 "adrfam": "IPv4", 00:19:11.003 "traddr": "10.0.0.2", 00:19:11.003 "trsvcid": "4420" 00:19:11.003 }, 00:19:11.003 "peer_address": { 00:19:11.003 "trtype": "TCP", 00:19:11.003 "adrfam": "IPv4", 00:19:11.003 "traddr": "10.0.0.1", 00:19:11.003 "trsvcid": "40484" 00:19:11.003 }, 00:19:11.003 "auth": { 00:19:11.003 "state": "completed", 00:19:11.003 "digest": "sha512", 00:19:11.003 "dhgroup": "ffdhe6144" 00:19:11.003 } 00:19:11.003 } 00:19:11.003 ]' 00:19:11.003 10:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.003 10:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:11.003 10:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.263 10:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:11.263 10:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.263 10:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.263 10:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.263 10:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.263 10:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZTIwMmY5MjNjMjUwOWRkZTE5ODU1MzliMTVjYTg2MjhiNTRmNzBiNDQ2NDdiYjZjqlA85g==: --dhchap-ctrl-secret DHHC-1:01:OTM3NTdlMGZlMzIzZDM5YzJiYTBmMjY5YjNiZTUxNWbBSYh7: 00:19:11.835 10:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.835 10:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:11.835 10:44:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:11.835 10:44:36 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.835 10:44:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:11.835 10:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:11.835 10:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:11.835 10:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:12.095 10:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:19:12.095 10:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.095 10:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:12.095 10:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:12.095 10:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:12.095 10:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.095 10:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:12.095 10:44:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:12.095 10:44:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.095 10:44:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:12.095 10:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.095 10:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.355 00:19:12.355 10:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.355 10:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.355 10:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.615 10:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.615 10:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.615 10:44:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:12.615 10:44:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.615 10:44:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:12.615 10:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:12.615 { 00:19:12.615 "cntlid": 135, 00:19:12.615 "qid": 0, 00:19:12.615 "state": "enabled", 00:19:12.615 "listen_address": { 
00:19:12.615 "trtype": "TCP", 00:19:12.615 "adrfam": "IPv4", 00:19:12.615 "traddr": "10.0.0.2", 00:19:12.615 "trsvcid": "4420" 00:19:12.615 }, 00:19:12.615 "peer_address": { 00:19:12.615 "trtype": "TCP", 00:19:12.615 "adrfam": "IPv4", 00:19:12.615 "traddr": "10.0.0.1", 00:19:12.615 "trsvcid": "40500" 00:19:12.615 }, 00:19:12.615 "auth": { 00:19:12.615 "state": "completed", 00:19:12.615 "digest": "sha512", 00:19:12.615 "dhgroup": "ffdhe6144" 00:19:12.615 } 00:19:12.615 } 00:19:12.615 ]' 00:19:12.615 10:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:12.615 10:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:12.615 10:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:12.615 10:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:12.615 10:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:12.875 10:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.875 10:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.875 10:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.875 10:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:OTc2ODk3OTc5NzBkZmE4MmY0OGY2MGFmNjY4ZDJiOWFjOGRiNDY1YzdmYmNhNjYxN2Y4OGE4OWM1ODhiMWU3ZReWLI4=: 00:19:13.817 10:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.817 10:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:13.817 10:44:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:13.817 10:44:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.817 10:44:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:13.817 10:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:13.817 10:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.817 10:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:13.817 10:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:13.817 10:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:19:13.817 10:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.817 10:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:13.817 10:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:13.817 10:44:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:19:13.817 10:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.817 10:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.817 10:44:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:13.817 10:44:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.817 10:44:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:13.817 10:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.817 10:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.389 00:19:14.389 10:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.389 10:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.389 10:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.389 10:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.389 10:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.389 10:44:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:14.389 10:44:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.389 10:44:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:14.389 10:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.389 { 00:19:14.389 "cntlid": 137, 00:19:14.389 "qid": 0, 00:19:14.389 "state": "enabled", 00:19:14.389 "listen_address": { 00:19:14.389 "trtype": "TCP", 00:19:14.389 "adrfam": "IPv4", 00:19:14.389 "traddr": "10.0.0.2", 00:19:14.389 "trsvcid": "4420" 00:19:14.389 }, 00:19:14.389 "peer_address": { 00:19:14.389 "trtype": "TCP", 00:19:14.389 "adrfam": "IPv4", 00:19:14.389 "traddr": "10.0.0.1", 00:19:14.389 "trsvcid": "40536" 00:19:14.389 }, 00:19:14.389 "auth": { 00:19:14.389 "state": "completed", 00:19:14.389 "digest": "sha512", 00:19:14.389 "dhgroup": "ffdhe8192" 00:19:14.389 } 00:19:14.389 } 00:19:14.389 ]' 00:19:14.389 10:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:14.649 10:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:14.649 10:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.649 10:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:14.649 10:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.649 10:44:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.649 10:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.649 10:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.909 10:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:YTJlMmQ1Zjg4ODcyNDRmNzQwNzNlNTg0YzJhMWExYWM0MGRlYzk3YmIwNjVkNzJhYnyJog==: --dhchap-ctrl-secret DHHC-1:03:ZWQwZTkzYzU5YzFhMWE5ZjQwMTA5Yjg5NDA1NzI2NzU2MWY3MjA3NDQ2NmU2YTIyMDMzZTdhYjcxMTAxMWJlMjniCls=: 00:19:15.479 10:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.479 10:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:15.479 10:44:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:15.479 10:44:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.479 10:44:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:15.480 10:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.480 10:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:15.480 10:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:15.739 10:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:19:15.739 10:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.739 10:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:15.739 10:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:15.739 10:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:15.739 10:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.739 10:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.739 10:44:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:15.739 10:44:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.739 10:44:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:15.739 10:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.739 10:44:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.310 00:19:16.310 10:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.310 10:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.310 10:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.310 10:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.310 10:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.310 10:44:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:16.310 10:44:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.310 10:44:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:16.310 10:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:16.310 { 00:19:16.310 "cntlid": 139, 00:19:16.310 "qid": 0, 00:19:16.310 "state": "enabled", 00:19:16.310 "listen_address": { 00:19:16.310 "trtype": "TCP", 00:19:16.310 "adrfam": "IPv4", 00:19:16.310 "traddr": "10.0.0.2", 00:19:16.310 "trsvcid": "4420" 00:19:16.310 }, 00:19:16.311 "peer_address": { 00:19:16.311 "trtype": "TCP", 00:19:16.311 "adrfam": "IPv4", 00:19:16.311 "traddr": "10.0.0.1", 00:19:16.311 "trsvcid": "40564" 00:19:16.311 }, 00:19:16.311 "auth": { 00:19:16.311 "state": "completed", 00:19:16.311 "digest": "sha512", 00:19:16.311 "dhgroup": "ffdhe8192" 00:19:16.311 } 00:19:16.311 } 00:19:16.311 ]' 00:19:16.311 10:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:16.311 10:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:16.311 10:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:16.572 10:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:16.572 10:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.572 10:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.572 10:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.572 10:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.572 10:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:YmEyNjgwMzZjZTg5ODcwZDBhNWZjOGNlOGY2MGVmZWTj5lnq: --dhchap-ctrl-secret DHHC-1:02:MTZlNjk0MTA2NGJjOWRiOGJmZGM2ODk0YWU1OTE4MjM2ODJjMWViZmQyZTRiYmZmr9e5tQ==: 00:19:17.143 10:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:19:17.143 10:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:17.143 10:44:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:17.143 10:44:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.143 10:44:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:17.143 10:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.143 10:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:17.143 10:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:17.404 10:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:17.404 10:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.404 10:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:17.404 10:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:17.404 10:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:17.404 10:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.404 10:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.404 10:44:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:17.404 10:44:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.404 10:44:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:17.405 10:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.405 10:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.976 00:19:17.976 10:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.976 10:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.976 10:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.236 10:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.236 10:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.236 10:44:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:19:18.236 10:44:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.236 10:44:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:18.236 10:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.236 { 00:19:18.236 "cntlid": 141, 00:19:18.236 "qid": 0, 00:19:18.236 "state": "enabled", 00:19:18.236 "listen_address": { 00:19:18.236 "trtype": "TCP", 00:19:18.236 "adrfam": "IPv4", 00:19:18.236 "traddr": "10.0.0.2", 00:19:18.236 "trsvcid": "4420" 00:19:18.236 }, 00:19:18.236 "peer_address": { 00:19:18.236 "trtype": "TCP", 00:19:18.236 "adrfam": "IPv4", 00:19:18.236 "traddr": "10.0.0.1", 00:19:18.236 "trsvcid": "36702" 00:19:18.236 }, 00:19:18.236 "auth": { 00:19:18.236 "state": "completed", 00:19:18.236 "digest": "sha512", 00:19:18.236 "dhgroup": "ffdhe8192" 00:19:18.236 } 00:19:18.236 } 00:19:18.236 ]' 00:19:18.236 10:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.236 10:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:18.236 10:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.236 10:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:18.236 10:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.236 10:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.236 10:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.236 10:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.497 10:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:ZTIwMmY5MjNjMjUwOWRkZTE5ODU1MzliMTVjYTg2MjhiNTRmNzBiNDQ2NDdiYjZjqlA85g==: --dhchap-ctrl-secret DHHC-1:01:OTM3NTdlMGZlMzIzZDM5YzJiYTBmMjY5YjNiZTUxNWbBSYh7: 00:19:19.067 10:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.067 10:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:19.067 10:44:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:19.067 10:44:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.067 10:44:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:19.067 10:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.067 10:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:19.067 10:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:19.328 10:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe8192 3 00:19:19.328 10:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.328 10:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:19.328 10:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:19.328 10:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:19.328 10:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.328 10:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:19.328 10:44:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:19.328 10:44:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.328 10:44:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:19.328 10:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:19.328 10:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:19.900 00:19:19.900 10:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:19.900 10:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.900 10:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.161 10:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.161 10:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.161 10:44:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:20.161 10:44:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.161 10:44:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:20.161 10:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.161 { 00:19:20.161 "cntlid": 143, 00:19:20.161 "qid": 0, 00:19:20.161 "state": "enabled", 00:19:20.161 "listen_address": { 00:19:20.161 "trtype": "TCP", 00:19:20.161 "adrfam": "IPv4", 00:19:20.161 "traddr": "10.0.0.2", 00:19:20.161 "trsvcid": "4420" 00:19:20.161 }, 00:19:20.161 "peer_address": { 00:19:20.161 "trtype": "TCP", 00:19:20.161 "adrfam": "IPv4", 00:19:20.161 "traddr": "10.0.0.1", 00:19:20.161 "trsvcid": "36718" 00:19:20.161 }, 00:19:20.161 "auth": { 00:19:20.161 "state": "completed", 00:19:20.161 "digest": "sha512", 00:19:20.161 "dhgroup": "ffdhe8192" 00:19:20.161 } 00:19:20.161 } 00:19:20.161 ]' 00:19:20.161 10:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.161 10:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:20.161 10:44:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.161 10:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:20.161 10:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.161 10:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.161 10:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.161 10:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.422 10:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:OTc2ODk3OTc5NzBkZmE4MmY0OGY2MGFmNjY4ZDJiOWFjOGRiNDY1YzdmYmNhNjYxN2Y4OGE4OWM1ODhiMWU3ZReWLI4=: 00:19:20.993 10:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.993 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.993 10:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:20.993 10:44:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:20.993 10:44:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.993 10:44:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:20.993 10:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:20.993 10:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:19:20.993 10:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:20.993 10:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:20.993 10:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:20.993 10:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:20.993 10:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:19:20.993 10:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:20.993 10:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:20.993 10:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:20.993 10:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:20.993 10:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.993 10:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
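The nvmf_subsystem_add_host entry just above, together with the bdev_nvme_attach_controller, qpair-state checks and detach/remove_host entries that follow, make up one pass of the script's connect_authenticate helper, here with sha512/ffdhe8192 and key0. Below is a condensed, hand-written sketch of such a pass (not the test script itself): the rpc.py path, socket paths, NQNs and key names are copied from the log above, while the DH-HMAC-CHAP key material prepared earlier in the run is assumed and not reproduced.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Host side: restrict DH-HMAC-CHAP negotiation to the digest/DH group under test.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

  # Target side (rpc.py default socket /var/tmp/spdk.sock): allow this host with key0/ckey0.
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Host side: attach over TCP with the same key pair, then inspect the resulting qpair.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'
  # expected: digest sha512, dhgroup ffdhe8192, state completed

  # Tear the session down again before the next key is exercised.
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"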
00:19:20.993 10:44:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:20.993 10:44:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.993 10:44:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:20.993 10:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.993 10:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.563 00:19:21.563 10:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.563 10:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:21.563 10:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.825 10:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.825 10:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.825 10:44:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:21.825 10:44:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.825 10:44:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:21.825 10:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:21.825 { 00:19:21.825 "cntlid": 145, 00:19:21.825 "qid": 0, 00:19:21.825 "state": "enabled", 00:19:21.825 "listen_address": { 00:19:21.825 "trtype": "TCP", 00:19:21.825 "adrfam": "IPv4", 00:19:21.825 "traddr": "10.0.0.2", 00:19:21.825 "trsvcid": "4420" 00:19:21.825 }, 00:19:21.825 "peer_address": { 00:19:21.825 "trtype": "TCP", 00:19:21.825 "adrfam": "IPv4", 00:19:21.825 "traddr": "10.0.0.1", 00:19:21.825 "trsvcid": "36750" 00:19:21.825 }, 00:19:21.825 "auth": { 00:19:21.825 "state": "completed", 00:19:21.825 "digest": "sha512", 00:19:21.825 "dhgroup": "ffdhe8192" 00:19:21.825 } 00:19:21.825 } 00:19:21.825 ]' 00:19:21.825 10:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:21.825 10:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:21.825 10:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:21.825 10:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:21.825 10:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:21.825 10:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.825 10:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.825 10:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.086 
10:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:YTJlMmQ1Zjg4ODcyNDRmNzQwNzNlNTg0YzJhMWExYWM0MGRlYzk3YmIwNjVkNzJhYnyJog==: --dhchap-ctrl-secret DHHC-1:03:ZWQwZTkzYzU5YzFhMWE5ZjQwMTA5Yjg5NDA1NzI2NzU2MWY3MjA3NDQ2NmU2YTIyMDMzZTdhYjcxMTAxMWJlMjniCls=: 00:19:22.656 10:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.656 10:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:22.656 10:44:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:22.656 10:44:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.656 10:44:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:22.656 10:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:19:22.656 10:44:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:22.656 10:44:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.656 10:44:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:22.656 10:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:22.656 10:44:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:19:22.656 10:44:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:22.656 10:44:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:19:22.656 10:44:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:22.656 10:44:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:19:22.656 10:44:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:22.656 10:44:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:22.656 10:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:23.227 request: 00:19:23.227 { 00:19:23.227 "name": "nvme0", 00:19:23.227 "trtype": "tcp", 00:19:23.227 "traddr": 
"10.0.0.2", 00:19:23.227 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:23.227 "adrfam": "ipv4", 00:19:23.227 "trsvcid": "4420", 00:19:23.227 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:23.227 "dhchap_key": "key2", 00:19:23.227 "method": "bdev_nvme_attach_controller", 00:19:23.227 "req_id": 1 00:19:23.227 } 00:19:23.227 Got JSON-RPC error response 00:19:23.227 response: 00:19:23.227 { 00:19:23.227 "code": -5, 00:19:23.227 "message": "Input/output error" 00:19:23.227 } 00:19:23.227 10:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:19:23.227 10:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:19:23.227 10:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:19:23.227 10:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:19:23.227 10:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:23.227 10:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:23.227 10:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.227 10:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:23.227 10:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.227 10:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:23.227 10:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.227 10:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:23.227 10:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:23.227 10:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:19:23.227 10:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:23.227 10:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:19:23.227 10:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:23.227 10:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:19:23.227 10:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:23.227 10:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:23.227 10:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:23.799 request: 00:19:23.799 { 00:19:23.799 "name": "nvme0", 00:19:23.799 "trtype": "tcp", 00:19:23.799 "traddr": "10.0.0.2", 00:19:23.799 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:23.799 "adrfam": "ipv4", 00:19:23.799 "trsvcid": "4420", 00:19:23.799 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:23.799 "dhchap_key": "key1", 00:19:23.799 "dhchap_ctrlr_key": "ckey2", 00:19:23.799 "method": "bdev_nvme_attach_controller", 00:19:23.799 "req_id": 1 00:19:23.799 } 00:19:23.799 Got JSON-RPC error response 00:19:23.799 response: 00:19:23.799 { 00:19:23.799 "code": -5, 00:19:23.799 "message": "Input/output error" 00:19:23.799 } 00:19:23.799 10:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:19:23.799 10:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:19:23.799 10:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:19:23.799 10:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:19:23.799 10:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:23.799 10:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:23.799 10:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.799 10:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:23.799 10:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:19:23.799 10:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:23.799 10:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.799 10:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:23.799 10:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.799 10:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:19:23.799 10:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.799 10:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:19:23.799 10:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:23.799 10:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:19:23.799 10:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:23.799 10:44:47 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.799 10:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.060 request: 00:19:24.060 { 00:19:24.060 "name": "nvme0", 00:19:24.060 "trtype": "tcp", 00:19:24.060 "traddr": "10.0.0.2", 00:19:24.060 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:24.060 "adrfam": "ipv4", 00:19:24.060 "trsvcid": "4420", 00:19:24.060 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:24.060 "dhchap_key": "key1", 00:19:24.060 "dhchap_ctrlr_key": "ckey1", 00:19:24.060 "method": "bdev_nvme_attach_controller", 00:19:24.060 "req_id": 1 00:19:24.060 } 00:19:24.060 Got JSON-RPC error response 00:19:24.060 response: 00:19:24.060 { 00:19:24.060 "code": -5, 00:19:24.060 "message": "Input/output error" 00:19:24.060 } 00:19:24.060 10:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:19:24.060 10:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:19:24.060 10:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:19:24.060 10:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:19:24.060 10:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:24.060 10:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:24.060 10:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.321 10:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:24.321 10:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 827507 00:19:24.321 10:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 827507 ']' 00:19:24.321 10:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 827507 00:19:24.321 10:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:19:24.321 10:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:24.321 10:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 827507 00:19:24.321 10:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:19:24.321 10:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:19:24.321 10:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 827507' 00:19:24.321 killing process with pid 827507 00:19:24.321 10:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 827507 00:19:24.321 10:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 827507 00:19:24.321 10:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:24.321 10:44:48 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:24.321 10:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:19:24.321 10:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.321 10:44:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=852909 00:19:24.321 10:44:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 852909 00:19:24.321 10:44:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:24.321 10:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 852909 ']' 00:19:24.321 10:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.321 10:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:24.321 10:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.321 10:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:24.321 10:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.263 10:44:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:25.263 10:44:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:19:25.263 10:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:25.263 10:44:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:19:25.263 10:44:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.263 10:44:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:25.263 10:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:25.263 10:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 852909 00:19:25.263 10:44:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 852909 ']' 00:19:25.263 10:44:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.263 10:44:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:25.263 10:44:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
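With the nvmf target restarted above under --wait-for-rpc -L nvmf_auth (pid 852909), the remaining entries first repeat the authenticate pass with key3 and then deliberately break the handshake: the host's allowed digests or DH groups are narrowed with bdev_nvme_set_options, or the subsystem's host entry is re-added with a different key (or none), and each attach attempt is wrapped in the script's NOT helper so that the expected JSON-RPC failure (code -5, "Input/output error") counts as a pass. A minimal, hand-written sketch of one such negative check follows, assuming the same $rpc, $hostnqn and $subnqn as in the sketch above; not_ok is an illustrative stand-in for the NOT helper visible in the xtrace.

  not_ok() { ! "$@"; }   # succeeds only if the wrapped command fails

  # Target side: re-register the host with key3 only (no controller key).
  $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3

  # Host side: allow only sha256 for DH-HMAC-CHAP (mirrors target/auth.sh@157 in the log).
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256

  # The attach is now expected to be rejected; rpc.py reports code -5, Input/output error.
  not_ok $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key3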
00:19:25.263 10:44:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:25.263 10:44:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.615 10:44:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:25.615 10:44:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:19:25.615 10:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:19:25.615 10:44:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:25.615 10:44:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.615 10:44:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:25.615 10:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:19:25.615 10:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:25.615 10:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:25.615 10:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:25.615 10:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:25.615 10:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.615 10:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:25.615 10:44:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:25.615 10:44:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.615 10:44:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:25.615 10:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:25.615 10:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:26.187 00:19:26.187 10:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.187 10:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.187 10:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.187 10:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.187 10:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.187 10:44:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:26.187 10:44:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.187 10:44:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:26.187 10:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.187 { 00:19:26.187 
"cntlid": 1, 00:19:26.187 "qid": 0, 00:19:26.187 "state": "enabled", 00:19:26.187 "listen_address": { 00:19:26.187 "trtype": "TCP", 00:19:26.187 "adrfam": "IPv4", 00:19:26.187 "traddr": "10.0.0.2", 00:19:26.187 "trsvcid": "4420" 00:19:26.187 }, 00:19:26.187 "peer_address": { 00:19:26.187 "trtype": "TCP", 00:19:26.187 "adrfam": "IPv4", 00:19:26.187 "traddr": "10.0.0.1", 00:19:26.187 "trsvcid": "36808" 00:19:26.187 }, 00:19:26.187 "auth": { 00:19:26.187 "state": "completed", 00:19:26.187 "digest": "sha512", 00:19:26.187 "dhgroup": "ffdhe8192" 00:19:26.187 } 00:19:26.187 } 00:19:26.187 ]' 00:19:26.187 10:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.187 10:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:26.187 10:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.187 10:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:26.187 10:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.448 10:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.448 10:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.448 10:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.448 10:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:OTc2ODk3OTc5NzBkZmE4MmY0OGY2MGFmNjY4ZDJiOWFjOGRiNDY1YzdmYmNhNjYxN2Y4OGE4OWM1ODhiMWU3ZReWLI4=: 00:19:27.020 10:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.020 10:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:27.020 10:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:27.020 10:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.020 10:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:27.020 10:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:27.020 10:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:27.020 10:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.020 10:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:27.020 10:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:27.020 10:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:27.280 10:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:27.281 10:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:19:27.281 10:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:27.281 10:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:19:27.281 10:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:27.281 10:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:19:27.281 10:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:27.281 10:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:27.281 10:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:27.281 request: 00:19:27.281 { 00:19:27.281 "name": "nvme0", 00:19:27.281 "trtype": "tcp", 00:19:27.281 "traddr": "10.0.0.2", 00:19:27.281 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:27.281 "adrfam": "ipv4", 00:19:27.281 "trsvcid": "4420", 00:19:27.281 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:27.281 "dhchap_key": "key3", 00:19:27.281 "method": "bdev_nvme_attach_controller", 00:19:27.281 "req_id": 1 00:19:27.281 } 00:19:27.281 Got JSON-RPC error response 00:19:27.281 response: 00:19:27.281 { 00:19:27.281 "code": -5, 00:19:27.281 "message": "Input/output error" 00:19:27.281 } 00:19:27.541 10:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:19:27.541 10:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:19:27.541 10:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:19:27.541 10:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:19:27.541 10:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:19:27.541 10:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:19:27.541 10:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:27.541 10:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:27.541 10:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:27.541 10:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:19:27.541 10:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:27.541 10:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:19:27.541 10:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:27.541 10:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:19:27.541 10:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:27.541 10:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:27.541 10:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:27.802 request: 00:19:27.802 { 00:19:27.802 "name": "nvme0", 00:19:27.802 "trtype": "tcp", 00:19:27.802 "traddr": "10.0.0.2", 00:19:27.802 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:27.802 "adrfam": "ipv4", 00:19:27.802 "trsvcid": "4420", 00:19:27.802 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:27.802 "dhchap_key": "key3", 00:19:27.802 "method": "bdev_nvme_attach_controller", 00:19:27.802 "req_id": 1 00:19:27.802 } 00:19:27.802 Got JSON-RPC error response 00:19:27.802 response: 00:19:27.802 { 00:19:27.802 "code": -5, 00:19:27.802 "message": "Input/output error" 00:19:27.802 } 00:19:27.802 10:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:19:27.802 10:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:19:27.802 10:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:19:27.802 10:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:19:27.802 10:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:27.802 10:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:19:27.802 10:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:27.802 10:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:27.802 10:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:27.802 10:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:27.802 10:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:27.802 10:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:27.802 10:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.802 10:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:27.802 10:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:27.802 10:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:27.802 10:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.062 10:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:28.062 10:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:28.062 10:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:19:28.062 10:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:28.062 10:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:19:28.062 10:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:28.062 10:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:19:28.062 10:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:28.062 10:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:28.062 10:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:28.062 request: 00:19:28.062 { 00:19:28.062 "name": "nvme0", 00:19:28.062 "trtype": "tcp", 00:19:28.062 "traddr": "10.0.0.2", 00:19:28.062 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:28.062 "adrfam": "ipv4", 00:19:28.062 "trsvcid": "4420", 00:19:28.062 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:28.062 "dhchap_key": "key0", 00:19:28.062 "dhchap_ctrlr_key": "key1", 00:19:28.062 "method": "bdev_nvme_attach_controller", 00:19:28.062 "req_id": 1 00:19:28.062 } 00:19:28.062 Got JSON-RPC error response 00:19:28.062 response: 00:19:28.062 { 00:19:28.062 "code": -5, 00:19:28.062 "message": "Input/output error" 00:19:28.062 } 00:19:28.062 10:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:19:28.062 10:44:52 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:19:28.062 10:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:19:28.062 10:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:19:28.062 10:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:28.062 10:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:28.321 00:19:28.321 10:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:19:28.321 10:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.321 10:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:19:28.581 10:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.581 10:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.581 10:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.581 10:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:19:28.581 10:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:19:28.581 10:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 827528 00:19:28.581 10:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 827528 ']' 00:19:28.581 10:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 827528 00:19:28.581 10:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:19:28.581 10:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:28.581 10:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 827528 00:19:28.581 10:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:19:28.581 10:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:19:28.581 10:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 827528' 00:19:28.581 killing process with pid 827528 00:19:28.581 10:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 827528 00:19:28.581 10:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 827528 00:19:28.841 10:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:28.841 10:44:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:28.841 10:44:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:19:28.841 10:44:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:28.841 10:44:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:19:28.841 
10:44:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:28.841 10:44:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:28.841 rmmod nvme_tcp 00:19:28.841 rmmod nvme_fabrics 00:19:28.841 rmmod nvme_keyring 00:19:28.841 10:44:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:28.841 10:44:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:19:28.841 10:44:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:19:28.841 10:44:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 852909 ']' 00:19:28.841 10:44:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 852909 00:19:28.841 10:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 852909 ']' 00:19:28.841 10:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 852909 00:19:28.841 10:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:19:28.841 10:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:28.841 10:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 852909 00:19:29.102 10:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:19:29.102 10:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:19:29.102 10:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 852909' 00:19:29.102 killing process with pid 852909 00:19:29.102 10:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 852909 00:19:29.102 10:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 852909 00:19:29.102 10:44:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:29.102 10:44:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:29.102 10:44:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:29.102 10:44:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:29.102 10:44:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:29.102 10:44:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.102 10:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:29.102 10:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:31.648 10:44:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:31.648 10:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.DCQ /tmp/spdk.key-sha256.0JE /tmp/spdk.key-sha384.R70 /tmp/spdk.key-sha512.juS /tmp/spdk.key-sha512.9ia /tmp/spdk.key-sha384.MCm /tmp/spdk.key-sha256.ana '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:31.648 00:19:31.648 real 2m18.852s 00:19:31.648 user 5m9.694s 00:19:31.648 sys 0m18.930s 00:19:31.648 10:44:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:31.648 10:44:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.648 ************************************ 00:19:31.648 END TEST nvmf_auth_target 
00:19:31.648 ************************************ 00:19:31.648 10:44:55 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:19:31.648 10:44:55 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:31.648 10:44:55 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:19:31.648 10:44:55 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:31.648 10:44:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:31.648 ************************************ 00:19:31.648 START TEST nvmf_bdevio_no_huge 00:19:31.648 ************************************ 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:31.648 * Looking for test storage... 00:19:31.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:31.648 
10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:19:31.648 10:44:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:38.237 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:38.237 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:38.237 10:45:02 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:38.237 Found net devices under 0000:31:00.0: cvl_0_0 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:38.237 Found net devices under 0000:31:00.1: cvl_0_1 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:38.237 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:38.238 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:38.238 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:38.238 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:38.238 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:38.238 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:38.238 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:38.498 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:38.498 
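Both ice ports live on one host, so nvmf_tcp_init splits them across network namespaces: the target port cvl_0_0 is moved into cvl_0_0_ns_spdk while the initiator port cvl_0_1 stays in the default namespace. The flush and netns steps just logged, together with the addressing, firewall, and ping checks that follow, reduce to roughly this sequence (interface names and addresses taken from this run; needs root):

  # Condensed sketch of nvmf_tcp_init for a phy run with two ports.
  TARGET_IF=cvl_0_0        # handed to the SPDK target inside the namespace
  INITIATOR_IF=cvl_0_1     # stays in the default namespace for the initiator
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                      # default namespace -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1  # target namespace -> initiator

The nvmf_tgt invocation below is prefixed with ip netns exec cvl_0_0_ns_spdk for the same reason: it should only ever see the namespaced port.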
10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:38.498 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:38.498 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:38.498 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:38.498 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:38.498 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:38.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:38.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.833 ms 00:19:38.498 00:19:38.498 --- 10.0.0.2 ping statistics --- 00:19:38.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.498 rtt min/avg/max/mdev = 0.833/0.833/0.833/0.000 ms 00:19:38.498 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:38.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:38.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:19:38.758 00:19:38.758 --- 10.0.0.1 ping statistics --- 00:19:38.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.758 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:19:38.758 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:38.758 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:19:38.758 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:38.758 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:38.758 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:38.758 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:38.758 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:38.758 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:38.758 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:38.758 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:38.758 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:38.758 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@723 -- # xtrace_disable 00:19:38.758 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:38.758 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=858133 00:19:38.758 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 858133 00:19:38.758 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:38.758 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@830 -- # '[' -z 858133 ']' 00:19:38.758 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.758 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 
-- # local max_retries=100 00:19:38.758 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:38.758 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:38.758 10:45:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:38.758 [2024-06-10 10:45:02.882241] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:19:38.758 [2024-06-10 10:45:02.882326] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:38.758 [2024-06-10 10:45:02.978756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:39.018 [2024-06-10 10:45:03.082593] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:39.018 [2024-06-10 10:45:03.082642] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:39.018 [2024-06-10 10:45:03.082654] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:39.018 [2024-06-10 10:45:03.082660] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:39.018 [2024-06-10 10:45:03.082666] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:39.018 [2024-06-10 10:45:03.082823] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4 00:19:39.018 [2024-06-10 10:45:03.082951] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 5 00:19:39.018 [2024-06-10 10:45:03.083107] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:19:39.018 [2024-06-10 10:45:03.083108] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 6 00:19:39.595 10:45:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:39.595 10:45:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@863 -- # return 0 00:19:39.595 10:45:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:39.595 10:45:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@729 -- # xtrace_disable 00:19:39.595 10:45:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:39.595 10:45:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:39.595 10:45:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:39.595 10:45:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:39.595 10:45:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:39.595 [2024-06-10 10:45:03.731987] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:39.595 10:45:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:39.595 10:45:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:39.595 10:45:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:39.595 10:45:03 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:39.595 Malloc0 00:19:39.595 10:45:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:39.595 10:45:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:39.596 10:45:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:39.596 10:45:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:39.596 10:45:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:39.596 10:45:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:39.596 10:45:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:39.596 10:45:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:39.596 10:45:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:39.596 10:45:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:39.596 10:45:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:39.596 10:45:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:39.596 [2024-06-10 10:45:03.785252] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:39.596 [2024-06-10 10:45:03.785565] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:39.596 10:45:03 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:39.596 10:45:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:39.596 10:45:03 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:39.596 10:45:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:19:39.596 10:45:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:19:39.596 10:45:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:39.596 10:45:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:39.596 { 00:19:39.596 "params": { 00:19:39.596 "name": "Nvme$subsystem", 00:19:39.596 "trtype": "$TEST_TRANSPORT", 00:19:39.596 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:39.596 "adrfam": "ipv4", 00:19:39.596 "trsvcid": "$NVMF_PORT", 00:19:39.596 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:39.596 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:39.596 "hdgst": ${hdgst:-false}, 00:19:39.596 "ddgst": ${ddgst:-false} 00:19:39.596 }, 00:19:39.596 "method": "bdev_nvme_attach_controller" 00:19:39.596 } 00:19:39.596 EOF 00:19:39.596 )") 00:19:39.596 10:45:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:19:39.596 10:45:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
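Stripped of the rpc_cmd and xtrace indirection, the target bring-up here is a short RPC sequence against the running nvmf_tgt, after which bdevio is pointed at it through a generated JSON config (rendered just below). A sketch, with /path/to/spdk standing in for the workspace checkout and the flags kept exactly as traced:

  # Target side: create the TCP transport, a RAM-backed bdev, and an exported subsystem.
  rpc=/path/to/spdk/scripts/rpc.py      # talks to the target's /var/tmp/spdk.sock
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: bdevio reads the generated config via process substitution,
  # which is why it appears in the trace as --json /dev/fd/62.
  /path/to/spdk/test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024

The rendered config simply asks bdev_nvme_attach_controller to open nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 as controller Nvme1, and the resulting Nvme1n1 bdev is what the CUnit suite exercises.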
00:19:39.596 10:45:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:19:39.596 10:45:03 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:39.596 "params": { 00:19:39.596 "name": "Nvme1", 00:19:39.596 "trtype": "tcp", 00:19:39.596 "traddr": "10.0.0.2", 00:19:39.596 "adrfam": "ipv4", 00:19:39.596 "trsvcid": "4420", 00:19:39.596 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.596 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:39.596 "hdgst": false, 00:19:39.596 "ddgst": false 00:19:39.596 }, 00:19:39.596 "method": "bdev_nvme_attach_controller" 00:19:39.596 }' 00:19:39.596 [2024-06-10 10:45:03.851018] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:19:39.596 [2024-06-10 10:45:03.851097] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid858325 ] 00:19:39.856 [2024-06-10 10:45:03.922431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:39.856 [2024-06-10 10:45:04.018604] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:19:39.856 [2024-06-10 10:45:04.018721] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:19:39.856 [2024-06-10 10:45:04.018724] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.116 I/O targets: 00:19:40.116 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:40.116 00:19:40.116 00:19:40.116 CUnit - A unit testing framework for C - Version 2.1-3 00:19:40.116 http://cunit.sourceforge.net/ 00:19:40.116 00:19:40.116 00:19:40.116 Suite: bdevio tests on: Nvme1n1 00:19:40.116 Test: blockdev write read block ...passed 00:19:40.116 Test: blockdev write zeroes read block ...passed 00:19:40.116 Test: blockdev write zeroes read no split ...passed 00:19:40.116 Test: blockdev write zeroes read split ...passed 00:19:40.116 Test: blockdev write zeroes read split partial ...passed 00:19:40.116 Test: blockdev reset ...[2024-06-10 10:45:04.347686] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:40.116 [2024-06-10 10:45:04.347744] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb4b900 (9): Bad file descriptor 00:19:40.376 [2024-06-10 10:45:04.485028] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:40.376 passed 00:19:40.376 Test: blockdev write read 8 blocks ...passed 00:19:40.376 Test: blockdev write read size > 128k ...passed 00:19:40.376 Test: blockdev write read invalid size ...passed 00:19:40.376 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:40.376 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:40.376 Test: blockdev write read max offset ...passed 00:19:40.376 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:40.376 Test: blockdev writev readv 8 blocks ...passed 00:19:40.637 Test: blockdev writev readv 30 x 1block ...passed 00:19:40.637 Test: blockdev writev readv block ...passed 00:19:40.637 Test: blockdev writev readv size > 128k ...passed 00:19:40.637 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:40.637 Test: blockdev comparev and writev ...[2024-06-10 10:45:04.752494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:40.637 [2024-06-10 10:45:04.752519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:40.637 [2024-06-10 10:45:04.752530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:40.637 [2024-06-10 10:45:04.752539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:40.637 [2024-06-10 10:45:04.753047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:40.637 [2024-06-10 10:45:04.753056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:40.637 [2024-06-10 10:45:04.753066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:40.637 [2024-06-10 10:45:04.753071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:40.637 [2024-06-10 10:45:04.753579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:40.637 [2024-06-10 10:45:04.753587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:40.637 [2024-06-10 10:45:04.753597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:40.637 [2024-06-10 10:45:04.753602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:40.637 [2024-06-10 10:45:04.754143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:40.637 [2024-06-10 10:45:04.754151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:40.637 [2024-06-10 10:45:04.754160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:40.637 [2024-06-10 10:45:04.754166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:40.637 passed 00:19:40.637 Test: blockdev nvme passthru rw ...passed 00:19:40.637 Test: blockdev nvme passthru vendor specific ...[2024-06-10 10:45:04.839248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:40.637 [2024-06-10 10:45:04.839259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:40.637 [2024-06-10 10:45:04.839563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:40.637 [2024-06-10 10:45:04.839572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:40.637 [2024-06-10 10:45:04.839954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:40.637 [2024-06-10 10:45:04.839962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:40.637 [2024-06-10 10:45:04.840346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:40.637 [2024-06-10 10:45:04.840355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:40.637 passed 00:19:40.637 Test: blockdev nvme admin passthru ...passed 00:19:40.637 Test: blockdev copy ...passed 00:19:40.637 00:19:40.637 Run Summary: Type Total Ran Passed Failed Inactive 00:19:40.637 suites 1 1 n/a 0 0 00:19:40.637 tests 23 23 23 0 0 00:19:40.637 asserts 152 152 152 0 n/a 00:19:40.637 00:19:40.637 Elapsed time = 1.483 seconds 00:19:40.897 10:45:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:40.897 10:45:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:40.897 10:45:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:40.897 10:45:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:40.897 10:45:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:40.897 10:45:05 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:40.897 10:45:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:40.897 10:45:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:19:40.897 10:45:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:40.897 10:45:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:19:40.897 10:45:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:40.897 10:45:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:41.159 rmmod nvme_tcp 00:19:41.159 rmmod nvme_fabrics 00:19:41.159 rmmod nvme_keyring 00:19:41.159 10:45:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:41.159 10:45:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:19:41.159 10:45:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:19:41.159 10:45:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 858133 ']' 00:19:41.159 10:45:05 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 858133 00:19:41.159 10:45:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@949 -- # '[' -z 858133 ']' 00:19:41.159 10:45:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # kill -0 858133 00:19:41.159 10:45:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # uname 00:19:41.159 10:45:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:41.159 10:45:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 858133 00:19:41.159 10:45:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:19:41.159 10:45:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:19:41.159 10:45:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # echo 'killing process with pid 858133' 00:19:41.159 killing process with pid 858133 00:19:41.159 10:45:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # kill 858133 00:19:41.159 [2024-06-10 10:45:05.303898] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:41.159 10:45:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # wait 858133 00:19:41.420 10:45:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:41.420 10:45:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:41.420 10:45:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:41.420 10:45:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:41.420 10:45:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:41.420 10:45:05 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.420 10:45:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:41.420 10:45:05 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.968 10:45:07 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:43.968 00:19:43.968 real 0m12.313s 00:19:43.968 user 0m14.249s 00:19:43.968 sys 0m6.405s 00:19:43.968 10:45:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:43.968 10:45:07 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:43.968 ************************************ 00:19:43.968 END TEST nvmf_bdevio_no_huge 00:19:43.968 ************************************ 00:19:43.968 10:45:07 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:43.968 10:45:07 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:19:43.968 10:45:07 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:43.968 10:45:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:43.968 ************************************ 00:19:43.968 START TEST nvmf_tls 00:19:43.968 ************************************ 00:19:43.968 10:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:43.968 * Looking for test 
storage... 00:19:43.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:43.968 10:45:07 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:43.968 10:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:43.968 10:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:43.968 10:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:43.968 10:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:43.968 10:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:43.968 10:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:43.968 10:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:43.968 10:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:43.968 10:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:43.968 10:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:43.968 10:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:43.968 10:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:43.968 10:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:43.968 10:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:43.968 10:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:43.968 10:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:43.968 10:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:43.968 10:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:43.968 10:45:07 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:43.968 10:45:07 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:43.968 10:45:07 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:43.968 10:45:07 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.968 10:45:07 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.969 10:45:07 nvmf_tcp.nvmf_tls 
-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.969 10:45:07 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:43.969 10:45:07 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.969 10:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:19:43.969 10:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:43.969 10:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:43.969 10:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:43.969 10:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:43.969 10:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:43.969 10:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:43.969 10:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:43.969 10:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:43.969 10:45:07 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:43.969 10:45:07 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:19:43.969 10:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:43.969 10:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:43.969 10:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:43.969 10:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:43.969 10:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:43.969 10:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.969 10:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:43.969 10:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.969 10:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:43.969 10:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:43.969 10:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:19:43.969 10:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:19:52.111 
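The TLS test opens with the same nvmftestinit flow just traced for bdevio: verify the transport, arm a cleanup trap, then let prepare_net_devs drop any stale namespace and rebuild the 10.0.0.x topology on the detected ports. A condensed sketch of that init/teardown pairing (the real functions in nvmf/common.sh carry extra branches for iso, rdma, and virt runs):

  # Rough shape of the setup/teardown pair every nvmf target test relies on.
  nvmftestinit() {
      [ -n "$TEST_TRANSPORT" ] || return 1          # tcp for this job
      trap nvmftestfini SIGINT SIGTERM EXIT         # guarantee teardown even on failure
      prepare_net_devs                              # remove a stale netns, detect NICs, re-plumb 10.0.0.x
  }

  nvmftestfini() {
      nvmfcleanup                                   # sync, then modprobe -r nvme-tcp / nvme-fabrics
      [ -n "$nvmfpid" ] && killprocess "$nvmfpid"   # stop the nvmf_tgt started by nvmfappstart
      remove_spdk_ns                                # delete cvl_0_0_ns_spdk and flush cvl_0_1
  }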
10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:52.111 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:52.111 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:52.111 Found net devices under 0000:31:00.0: cvl_0_0 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:52.111 Found net devices under 0000:31:00.1: cvl_0_1 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:52.111 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:52.112 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:52.112 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:52.112 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:52.112 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:52.112 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:52.112 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:52.112 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:52.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:52.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.523 ms 00:19:52.112 00:19:52.112 --- 10.0.0.2 ping statistics --- 00:19:52.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.112 rtt min/avg/max/mdev = 0.523/0.523/0.523/0.000 ms 00:19:52.112 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:52.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:52.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:19:52.112 00:19:52.112 --- 10.0.0.1 ping statistics --- 00:19:52.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.112 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:19:52.112 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:52.112 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:19:52.112 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:52.112 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:52.112 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:52.112 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:52.112 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:52.112 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:52.112 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:52.112 10:45:15 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:52.112 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:52.112 10:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:19:52.112 10:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.112 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=863328 00:19:52.112 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 863328 00:19:52.112 10:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:52.112 10:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 863328 ']' 00:19:52.112 10:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.112 10:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:52.112 10:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.112 10:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:52.112 10:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.112 [2024-06-10 10:45:15.468728] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:19:52.112 [2024-06-10 10:45:15.468779] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.112 EAL: No free 2048 kB hugepages reported on node 1 00:19:52.112 [2024-06-10 10:45:15.555557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.112 [2024-06-10 10:45:15.647942] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.112 [2024-06-10 10:45:15.648000] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:52.112 [2024-06-10 10:45:15.648008] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.112 [2024-06-10 10:45:15.648015] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.112 [2024-06-10 10:45:15.648021] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:52.112 [2024-06-10 10:45:15.648046] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.112 10:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:52.112 10:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:19:52.112 10:45:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:52.112 10:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:19:52.112 10:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.112 10:45:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:52.112 10:45:16 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:19:52.112 10:45:16 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:52.373 true 00:19:52.373 10:45:16 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:52.373 10:45:16 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:19:52.373 10:45:16 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:19:52.373 10:45:16 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:19:52.373 10:45:16 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:52.634 10:45:16 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:52.634 10:45:16 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:19:52.896 10:45:16 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:19:52.896 10:45:16 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:19:52.896 10:45:16 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:52.896 10:45:17 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:52.896 10:45:17 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:19:53.156 10:45:17 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:19:53.156 10:45:17 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:19:53.156 10:45:17 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:53.156 10:45:17 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:19:53.417 10:45:17 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:19:53.417 10:45:17 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:19:53.417 10:45:17 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:53.417 10:45:17 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:53.417 10:45:17 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:19:53.677 10:45:17 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:19:53.677 10:45:17 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:19:53.677 10:45:17 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:53.677 10:45:17 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:53.677 10:45:17 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:19:53.937 10:45:18 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:19:53.937 10:45:18 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:19:53.937 10:45:18 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:53.937 10:45:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:53.937 10:45:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:53.937 10:45:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:53.937 10:45:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:19:53.937 10:45:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:53.937 10:45:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:53.937 10:45:18 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:53.937 10:45:18 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:53.937 10:45:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:53.937 10:45:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:53.937 10:45:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:53.937 10:45:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:19:53.937 10:45:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:53.937 10:45:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:53.937 10:45:18 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:53.937 10:45:18 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:19:53.937 10:45:18 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.aqpEFTpTN2 00:19:53.937 10:45:18 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:53.938 10:45:18 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.qyetXWRKiN 00:19:53.938 10:45:18 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:53.938 10:45:18 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:53.938 10:45:18 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.aqpEFTpTN2 00:19:53.938 10:45:18 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.qyetXWRKiN 00:19:53.938 10:45:18 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:19:54.197 10:45:18 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:54.457 10:45:18 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.aqpEFTpTN2 00:19:54.457 10:45:18 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.aqpEFTpTN2 00:19:54.457 10:45:18 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:54.457 [2024-06-10 10:45:18.643879] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:54.457 10:45:18 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:54.716 10:45:18 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:54.716 [2024-06-10 10:45:18.948607] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:54.716 [2024-06-10 10:45:18.948645] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:54.716 [2024-06-10 10:45:18.948808] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:54.716 10:45:18 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:54.977 malloc0 00:19:54.977 10:45:19 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:55.238 10:45:19 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aqpEFTpTN2 00:19:55.238 [2024-06-10 10:45:19.387582] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:55.238 10:45:19 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.aqpEFTpTN2 00:19:55.238 EAL: No free 2048 kB hugepages reported on node 1 00:20:05.239 Initializing NVMe Controllers 00:20:05.239 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:05.239 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:05.239 Initialization complete. Launching workers. 
00:20:05.239 ======================================================== 00:20:05.239 Latency(us) 00:20:05.239 Device Information : IOPS MiB/s Average min max 00:20:05.239 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19111.68 74.66 3348.75 1117.82 4244.31 00:20:05.239 ======================================================== 00:20:05.239 Total : 19111.68 74.66 3348.75 1117.82 4244.31 00:20:05.239 00:20:05.239 10:45:29 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aqpEFTpTN2 00:20:05.239 10:45:29 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:05.239 10:45:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:05.239 10:45:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:05.239 10:45:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.aqpEFTpTN2' 00:20:05.239 10:45:29 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:05.239 10:45:29 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=866062 00:20:05.239 10:45:29 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:05.239 10:45:29 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 866062 /var/tmp/bdevperf.sock 00:20:05.239 10:45:29 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:05.239 10:45:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 866062 ']' 00:20:05.239 10:45:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:05.239 10:45:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:05.239 10:45:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:05.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:05.239 10:45:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:05.239 10:45:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.499 [2024-06-10 10:45:29.532513] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
00:20:05.499 [2024-06-10 10:45:29.532571] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid866062 ] 00:20:05.499 EAL: No free 2048 kB hugepages reported on node 1 00:20:05.499 [2024-06-10 10:45:29.583208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.499 [2024-06-10 10:45:29.635271] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:20:06.071 10:45:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:06.071 10:45:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:06.071 10:45:30 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aqpEFTpTN2 00:20:06.331 [2024-06-10 10:45:30.428153] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:06.331 [2024-06-10 10:45:30.428219] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:06.331 TLSTESTn1 00:20:06.331 10:45:30 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:06.331 Running I/O for 10 seconds... 00:20:18.648 00:20:18.648 Latency(us) 00:20:18.648 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.648 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:18.648 Verification LBA range: start 0x0 length 0x2000 00:20:18.648 TLSTESTn1 : 10.03 4799.72 18.75 0.00 0.00 26623.31 6007.47 49370.45 00:20:18.648 =================================================================================================================== 00:20:18.648 Total : 4799.72 18.75 0.00 0.00 26623.31 6007.47 49370.45 00:20:18.648 0 00:20:18.648 10:45:40 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:18.648 10:45:40 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 866062 00:20:18.648 10:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 866062 ']' 00:20:18.648 10:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 866062 00:20:18.648 10:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:18.648 10:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:18.648 10:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 866062 00:20:18.648 10:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:20:18.649 10:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:20:18.649 10:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 866062' 00:20:18.649 killing process with pid 866062 00:20:18.649 10:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 866062 00:20:18.649 Received shutdown signal, test time was about 10.000000 seconds 00:20:18.649 00:20:18.649 Latency(us) 00:20:18.649 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.649 
=================================================================================================================== 00:20:18.649 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:18.649 [2024-06-10 10:45:40.724888] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:18.649 10:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 866062 00:20:18.649 10:45:40 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qyetXWRKiN 00:20:18.649 10:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:20:18.649 10:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qyetXWRKiN 00:20:18.649 10:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:20:18.649 10:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:18.649 10:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:20:18.649 10:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:18.649 10:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qyetXWRKiN 00:20:18.649 10:45:40 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:18.649 10:45:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:18.649 10:45:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:18.649 10:45:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.qyetXWRKiN' 00:20:18.649 10:45:40 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:18.649 10:45:40 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=868328 00:20:18.649 10:45:40 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:18.649 10:45:40 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 868328 /var/tmp/bdevperf.sock 00:20:18.649 10:45:40 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:18.649 10:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 868328 ']' 00:20:18.649 10:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:18.649 10:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:18.649 10:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:18.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:18.649 10:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:18.649 10:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.649 [2024-06-10 10:45:40.888616] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
00:20:18.649 [2024-06-10 10:45:40.888669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid868328 ] 00:20:18.649 EAL: No free 2048 kB hugepages reported on node 1 00:20:18.649 [2024-06-10 10:45:40.938350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.649 [2024-06-10 10:45:40.989963] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:20:18.649 10:45:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:18.649 10:45:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:18.649 10:45:41 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qyetXWRKiN 00:20:18.649 [2024-06-10 10:45:41.811131] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:18.649 [2024-06-10 10:45:41.811191] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:18.649 [2024-06-10 10:45:41.818420] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:18.649 [2024-06-10 10:45:41.819212] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2022880 (107): Transport endpoint is not connected 00:20:18.649 [2024-06-10 10:45:41.820209] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2022880 (9): Bad file descriptor 00:20:18.649 [2024-06-10 10:45:41.821210] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:18.649 [2024-06-10 10:45:41.821218] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:18.649 [2024-06-10 10:45:41.821224] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:18.649 request: 00:20:18.649 { 00:20:18.649 "name": "TLSTEST", 00:20:18.649 "trtype": "tcp", 00:20:18.649 "traddr": "10.0.0.2", 00:20:18.649 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:18.649 "adrfam": "ipv4", 00:20:18.649 "trsvcid": "4420", 00:20:18.649 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.649 "psk": "/tmp/tmp.qyetXWRKiN", 00:20:18.649 "method": "bdev_nvme_attach_controller", 00:20:18.649 "req_id": 1 00:20:18.649 } 00:20:18.649 Got JSON-RPC error response 00:20:18.649 response: 00:20:18.649 { 00:20:18.649 "code": -5, 00:20:18.649 "message": "Input/output error" 00:20:18.649 } 00:20:18.649 10:45:41 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 868328 00:20:18.649 10:45:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 868328 ']' 00:20:18.649 10:45:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 868328 00:20:18.649 10:45:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:18.649 10:45:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:18.649 10:45:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 868328 00:20:18.649 10:45:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:20:18.649 10:45:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:20:18.649 10:45:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 868328' 00:20:18.649 killing process with pid 868328 00:20:18.649 10:45:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 868328 00:20:18.649 Received shutdown signal, test time was about 10.000000 seconds 00:20:18.649 00:20:18.649 Latency(us) 00:20:18.649 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.649 =================================================================================================================== 00:20:18.649 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:18.649 [2024-06-10 10:45:41.905406] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:18.649 10:45:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 868328 00:20:18.649 10:45:42 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:18.649 10:45:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:20:18.649 10:45:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:18.649 10:45:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:18.649 10:45:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:18.649 10:45:42 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.aqpEFTpTN2 00:20:18.649 10:45:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:20:18.649 10:45:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.aqpEFTpTN2 00:20:18.649 10:45:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:20:18.649 10:45:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:18.649 10:45:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:20:18.649 10:45:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case 
"$(type -t "$arg")" in 00:20:18.649 10:45:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.aqpEFTpTN2 00:20:18.649 10:45:42 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:18.649 10:45:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:18.649 10:45:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:18.649 10:45:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.aqpEFTpTN2' 00:20:18.649 10:45:42 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:18.649 10:45:42 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=868424 00:20:18.649 10:45:42 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:18.649 10:45:42 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 868424 /var/tmp/bdevperf.sock 00:20:18.649 10:45:42 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:18.649 10:45:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 868424 ']' 00:20:18.649 10:45:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:18.649 10:45:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:18.649 10:45:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:18.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:18.649 10:45:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:18.649 10:45:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.649 [2024-06-10 10:45:42.069403] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
00:20:18.650 [2024-06-10 10:45:42.069456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid868424 ] 00:20:18.650 EAL: No free 2048 kB hugepages reported on node 1 00:20:18.650 [2024-06-10 10:45:42.120863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.650 [2024-06-10 10:45:42.172851] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:20:18.650 10:45:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:18.650 10:45:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:18.650 10:45:42 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.aqpEFTpTN2 00:20:18.911 [2024-06-10 10:45:42.978001] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:18.911 [2024-06-10 10:45:42.978068] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:18.911 [2024-06-10 10:45:42.982389] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:18.911 [2024-06-10 10:45:42.982407] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:18.911 [2024-06-10 10:45:42.982425] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:18.911 [2024-06-10 10:45:42.983077] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad3880 (107): Transport endpoint is not connected 00:20:18.911 [2024-06-10 10:45:42.984071] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad3880 (9): Bad file descriptor 00:20:18.911 [2024-06-10 10:45:42.985074] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:18.911 [2024-06-10 10:45:42.985082] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:18.911 [2024-06-10 10:45:42.985088] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:18.911 request: 00:20:18.911 { 00:20:18.911 "name": "TLSTEST", 00:20:18.911 "trtype": "tcp", 00:20:18.911 "traddr": "10.0.0.2", 00:20:18.911 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:18.911 "adrfam": "ipv4", 00:20:18.911 "trsvcid": "4420", 00:20:18.911 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.911 "psk": "/tmp/tmp.aqpEFTpTN2", 00:20:18.911 "method": "bdev_nvme_attach_controller", 00:20:18.911 "req_id": 1 00:20:18.911 } 00:20:18.911 Got JSON-RPC error response 00:20:18.911 response: 00:20:18.911 { 00:20:18.911 "code": -5, 00:20:18.911 "message": "Input/output error" 00:20:18.911 } 00:20:18.911 10:45:43 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 868424 00:20:18.911 10:45:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 868424 ']' 00:20:18.911 10:45:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 868424 00:20:18.911 10:45:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:18.911 10:45:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:18.911 10:45:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 868424 00:20:18.911 10:45:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:20:18.911 10:45:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:20:18.911 10:45:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 868424' 00:20:18.911 killing process with pid 868424 00:20:18.911 10:45:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 868424 00:20:18.911 Received shutdown signal, test time was about 10.000000 seconds 00:20:18.911 00:20:18.911 Latency(us) 00:20:18.911 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.911 =================================================================================================================== 00:20:18.911 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:18.911 [2024-06-10 10:45:43.066938] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:18.911 10:45:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 868424 00:20:18.911 10:45:43 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:18.911 10:45:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:20:18.911 10:45:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:18.911 10:45:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:18.912 10:45:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:18.912 10:45:43 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.aqpEFTpTN2 00:20:18.912 10:45:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:20:18.912 10:45:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.aqpEFTpTN2 00:20:18.912 10:45:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:20:18.912 10:45:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:18.912 10:45:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:20:18.912 10:45:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case 
"$(type -t "$arg")" in 00:20:18.912 10:45:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.aqpEFTpTN2 00:20:18.912 10:45:43 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:18.912 10:45:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:18.912 10:45:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:18.912 10:45:43 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.aqpEFTpTN2' 00:20:18.912 10:45:43 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:18.912 10:45:43 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=868767 00:20:18.912 10:45:43 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:18.912 10:45:43 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 868767 /var/tmp/bdevperf.sock 00:20:18.912 10:45:43 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:18.912 10:45:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 868767 ']' 00:20:18.912 10:45:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:18.912 10:45:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:18.912 10:45:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:18.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:18.912 10:45:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:18.912 10:45:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.172 [2024-06-10 10:45:43.223542] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
00:20:19.172 [2024-06-10 10:45:43.223595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid868767 ] 00:20:19.172 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.172 [2024-06-10 10:45:43.272989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.172 [2024-06-10 10:45:43.325701] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.745 10:45:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:19.745 10:45:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:19.745 10:45:43 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aqpEFTpTN2 00:20:20.007 [2024-06-10 10:45:44.106371] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:20.007 [2024-06-10 10:45:44.106430] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:20.007 [2024-06-10 10:45:44.113914] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:20.007 [2024-06-10 10:45:44.113931] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:20.007 [2024-06-10 10:45:44.113948] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:20.007 [2024-06-10 10:45:44.114456] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c1880 (107): Transport endpoint is not connected 00:20:20.007 [2024-06-10 10:45:44.115452] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c1880 (9): Bad file descriptor 00:20:20.007 [2024-06-10 10:45:44.116453] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:20.007 [2024-06-10 10:45:44.116461] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:20.007 [2024-06-10 10:45:44.116469] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:20:20.007 request: 00:20:20.007 { 00:20:20.007 "name": "TLSTEST", 00:20:20.007 "trtype": "tcp", 00:20:20.007 "traddr": "10.0.0.2", 00:20:20.007 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:20.007 "adrfam": "ipv4", 00:20:20.007 "trsvcid": "4420", 00:20:20.007 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:20.007 "psk": "/tmp/tmp.aqpEFTpTN2", 00:20:20.007 "method": "bdev_nvme_attach_controller", 00:20:20.007 "req_id": 1 00:20:20.007 } 00:20:20.007 Got JSON-RPC error response 00:20:20.007 response: 00:20:20.007 { 00:20:20.007 "code": -5, 00:20:20.007 "message": "Input/output error" 00:20:20.007 } 00:20:20.007 10:45:44 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 868767 00:20:20.007 10:45:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 868767 ']' 00:20:20.007 10:45:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 868767 00:20:20.007 10:45:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:20.007 10:45:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:20.007 10:45:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 868767 00:20:20.007 10:45:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:20:20.007 10:45:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:20:20.007 10:45:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 868767' 00:20:20.007 killing process with pid 868767 00:20:20.007 10:45:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 868767 00:20:20.007 Received shutdown signal, test time was about 10.000000 seconds 00:20:20.007 00:20:20.007 Latency(us) 00:20:20.007 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.007 =================================================================================================================== 00:20:20.007 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:20.007 [2024-06-10 10:45:44.183712] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:20.007 10:45:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 868767 00:20:20.007 10:45:44 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:20.007 10:45:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:20:20.007 10:45:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:20.007 10:45:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:20.007 10:45:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:20.007 10:45:44 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:20.007 10:45:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:20:20.007 10:45:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:20.007 10:45:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:20:20.007 10:45:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:20.007 10:45:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:20:20.007 10:45:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:20.007 
10:45:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:20.007 10:45:44 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:20.007 10:45:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:20.007 10:45:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:20.007 10:45:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:20.007 10:45:44 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:20.007 10:45:44 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=869033 00:20:20.007 10:45:44 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:20.007 10:45:44 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 869033 /var/tmp/bdevperf.sock 00:20:20.268 10:45:44 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:20.268 10:45:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 869033 ']' 00:20:20.268 10:45:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:20.268 10:45:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:20.268 10:45:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:20.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:20.268 10:45:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:20.268 10:45:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.268 [2024-06-10 10:45:44.349563] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
00:20:20.268 [2024-06-10 10:45:44.349633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid869033 ] 00:20:20.268 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.268 [2024-06-10 10:45:44.399493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.269 [2024-06-10 10:45:44.451684] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:20:20.839 10:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:20.839 10:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:20.839 10:45:45 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:21.101 [2024-06-10 10:45:45.246694] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:21.101 [2024-06-10 10:45:45.248015] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23371e0 (9): Bad file descriptor 00:20:21.101 [2024-06-10 10:45:45.249015] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.101 [2024-06-10 10:45:45.249024] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:21.101 [2024-06-10 10:45:45.249031] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:21.101 request: 00:20:21.101 { 00:20:21.101 "name": "TLSTEST", 00:20:21.101 "trtype": "tcp", 00:20:21.101 "traddr": "10.0.0.2", 00:20:21.101 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:21.101 "adrfam": "ipv4", 00:20:21.101 "trsvcid": "4420", 00:20:21.101 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.101 "method": "bdev_nvme_attach_controller", 00:20:21.101 "req_id": 1 00:20:21.101 } 00:20:21.101 Got JSON-RPC error response 00:20:21.101 response: 00:20:21.101 { 00:20:21.101 "code": -5, 00:20:21.101 "message": "Input/output error" 00:20:21.101 } 00:20:21.101 10:45:45 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 869033 00:20:21.101 10:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 869033 ']' 00:20:21.101 10:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 869033 00:20:21.101 10:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:21.101 10:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:21.101 10:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 869033 00:20:21.101 10:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:20:21.101 10:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:20:21.101 10:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 869033' 00:20:21.101 killing process with pid 869033 00:20:21.101 10:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 869033 00:20:21.101 Received shutdown signal, test time was about 10.000000 seconds 00:20:21.101 00:20:21.101 Latency(us) 00:20:21.101 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.101 =================================================================================================================== 00:20:21.101 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:21.101 10:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 869033 00:20:21.362 10:45:45 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:21.362 10:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:20:21.362 10:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:21.362 10:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:21.362 10:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:21.362 10:45:45 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 863328 00:20:21.362 10:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 863328 ']' 00:20:21.362 10:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 863328 00:20:21.362 10:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:21.362 10:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:21.362 10:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 863328 00:20:21.362 10:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:20:21.362 10:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:20:21.362 10:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 863328' 00:20:21.362 killing process with pid 863328 00:20:21.362 10:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 863328 00:20:21.362 
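Each failure case exercised above (a PSK file that does not match the target's, a host NQN the target does not know, a subsystem NQN that was never configured, and no PSK at all) has the same shape: bdevperf is started idle with -z, a bdev_nvme_attach_controller RPC is sent over its private socket, and the test only counts as passed when that RPC fails. Reduced to its core, and reusing the socket path, NQNs, and key file from this run, the check is roughly the following; the harness expresses the same idea through its NOT wrapper and the return 1 seen after each case:

# sketch: assert that attaching with a mismatched PSK is rejected
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
if "$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.qyetXWRKiN; then
    echo "FAIL: controller attached despite the wrong key" >&2
    exit 1
fi

In every case the expected result is the JSON-RPC Input/output error recorded in the trace, after which the bdevperf instance is killed and the harness moves on to the next variant.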
[2024-06-10 10:45:45.480736] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:21.362 [2024-06-10 10:45:45.480761] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:21.362 10:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 863328 00:20:21.362 10:45:45 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:21.362 10:45:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:21.362 10:45:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:21.362 10:45:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:21.362 10:45:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:21.362 10:45:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:20:21.362 10:45:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:21.362 10:45:45 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:21.362 10:45:45 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:20:21.362 10:45:45 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.LLy1V17JdC 00:20:21.362 10:45:45 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:21.362 10:45:45 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.LLy1V17JdC 00:20:21.659 10:45:45 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:21.659 10:45:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:21.659 10:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:21.659 10:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.659 10:45:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=869208 00:20:21.659 10:45:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 869208 00:20:21.659 10:45:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:21.659 10:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 869208 ']' 00:20:21.659 10:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.659 10:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:21.659 10:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.659 10:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:21.659 10:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.659 [2024-06-10 10:45:45.712432] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
00:20:21.659 [2024-06-10 10:45:45.712489] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:21.659 EAL: No free 2048 kB hugepages reported on node 1 00:20:21.659 [2024-06-10 10:45:45.792945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.659 [2024-06-10 10:45:45.847675] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:21.659 [2024-06-10 10:45:45.847706] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:21.659 [2024-06-10 10:45:45.847712] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:21.659 [2024-06-10 10:45:45.847716] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:21.659 [2024-06-10 10:45:45.847720] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:21.659 [2024-06-10 10:45:45.847734] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.231 10:45:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:22.231 10:45:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:22.231 10:45:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:22.231 10:45:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:22.231 10:45:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:22.231 10:45:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:22.231 10:45:46 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.LLy1V17JdC 00:20:22.231 10:45:46 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.LLy1V17JdC 00:20:22.231 10:45:46 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:22.492 [2024-06-10 10:45:46.649861] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:22.492 10:45:46 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:22.752 10:45:46 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:22.752 [2024-06-10 10:45:46.942562] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:22.752 [2024-06-10 10:45:46.942595] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:22.752 [2024-06-10 10:45:46.942745] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:22.752 10:45:46 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:23.012 malloc0 00:20:23.012 10:45:47 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
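The key file used for this second target comes from format_interchange_psk, which the trace shows handing the hex string and a digest number to an inline python snippet. Judging from the keys printed earlier in this log (NVMeTLSkey-1:01:...: and NVMeTLSkey-1:02:...:), the layout is the NVMeTLSkey-1 prefix, a two-digit hash identifier, and a base64 blob of the key text with a little-endian CRC32 appended. A standalone sketch under that reading; the hash-identifier formatting and the CRC detail are inferred from the printed keys, with the authoritative version being format_key in nvmf/common.sh:

# sketch: reproduce the interchange-format PSK for the long key used below
key=00112233445566778899aabbccddeeff0011223344556677
digest=2                                  # 01 and 02 are the identifiers seen in this log
python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()                # the hex string is wrapped as ASCII text, not decoded
crc = zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
EOF

The harness then writes the resulting string to a mktemp path (/tmp/tmp.LLy1V17JdC here), restricts it to mode 0600, and passes it to nvmf_subsystem_add_host --psk, which is what the next trace entries show.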
00:20:23.012 10:45:47 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LLy1V17JdC 00:20:23.273 [2024-06-10 10:45:47.365380] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:23.273 10:45:47 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LLy1V17JdC 00:20:23.273 10:45:47 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:23.273 10:45:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:23.273 10:45:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:23.273 10:45:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.LLy1V17JdC' 00:20:23.273 10:45:47 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:23.273 10:45:47 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:23.273 10:45:47 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=869575 00:20:23.273 10:45:47 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:23.273 10:45:47 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 869575 /var/tmp/bdevperf.sock 00:20:23.273 10:45:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 869575 ']' 00:20:23.273 10:45:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:23.273 10:45:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:23.273 10:45:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:23.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:23.273 10:45:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:23.273 10:45:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:23.273 [2024-06-10 10:45:47.413561] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
00:20:23.273 [2024-06-10 10:45:47.413612] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid869575 ] 00:20:23.273 EAL: No free 2048 kB hugepages reported on node 1 00:20:23.273 [2024-06-10 10:45:47.468659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.273 [2024-06-10 10:45:47.520886] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.216 10:45:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:24.216 10:45:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:24.216 10:45:48 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LLy1V17JdC 00:20:24.216 [2024-06-10 10:45:48.330111] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:24.216 [2024-06-10 10:45:48.330173] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:24.216 TLSTESTn1 00:20:24.216 10:45:48 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:24.477 Running I/O for 10 seconds... 00:20:34.473 00:20:34.473 Latency(us) 00:20:34.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.473 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:34.473 Verification LBA range: start 0x0 length 0x2000 00:20:34.473 TLSTESTn1 : 10.02 5301.95 20.71 0.00 0.00 24104.76 4587.52 96556.37 00:20:34.473 =================================================================================================================== 00:20:34.473 Total : 5301.95 20.71 0.00 0.00 24104.76 4587.52 96556.37 00:20:34.473 0 00:20:34.473 10:45:58 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:34.473 10:45:58 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 869575 00:20:34.473 10:45:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 869575 ']' 00:20:34.473 10:45:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 869575 00:20:34.473 10:45:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:34.473 10:45:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:34.473 10:45:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 869575 00:20:34.473 10:45:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:20:34.473 10:45:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:20:34.473 10:45:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 869575' 00:20:34.473 killing process with pid 869575 00:20:34.473 10:45:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 869575 00:20:34.473 Received shutdown signal, test time was about 10.000000 seconds 00:20:34.473 00:20:34.473 Latency(us) 00:20:34.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.473 
=================================================================================================================== 00:20:34.473 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:34.473 [2024-06-10 10:45:58.627340] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:34.473 10:45:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 869575 00:20:34.473 10:45:58 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.LLy1V17JdC 00:20:34.473 10:45:58 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LLy1V17JdC 00:20:34.473 10:45:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:20:34.473 10:45:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LLy1V17JdC 00:20:34.473 10:45:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:20:34.473 10:45:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:34.473 10:45:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:20:34.473 10:45:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:34.473 10:45:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LLy1V17JdC 00:20:34.473 10:45:58 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:34.473 10:45:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:34.473 10:45:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:34.473 10:45:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.LLy1V17JdC' 00:20:34.473 10:45:58 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:34.473 10:45:58 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=871837 00:20:34.473 10:45:58 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:34.473 10:45:58 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:34.473 10:45:58 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 871837 /var/tmp/bdevperf.sock 00:20:34.473 10:45:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 871837 ']' 00:20:34.473 10:45:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:34.473 10:45:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:34.473 10:45:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:34.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:34.473 10:45:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:34.473 10:45:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.734 [2024-06-10 10:45:58.804034] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
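At this point the test has deliberately loosened the PSK file to mode 0666 and launched a second bdevperf (pid 871837) whose attach is expected to fail: with a world-readable key, bdev_nvme reports "Incorrect permissions for PSK file" and the RPC comes back as "Operation not permitted", as shown just below. A minimal sketch of this negative case, with paths copied from the log and the rpc.py path abbreviated:

  chmod 0666 /tmp/tmp.LLy1V17JdC   # intentionally too permissive for a PSK
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LLy1V17JdC
  # expected outcome: "Could not load PSK from /tmp/tmp.LLy1V17JdC" -> JSON-RPC error -1, Operation not permitted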
00:20:34.734 [2024-06-10 10:45:58.804094] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid871837 ] 00:20:34.734 EAL: No free 2048 kB hugepages reported on node 1 00:20:34.734 [2024-06-10 10:45:58.854811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.734 [2024-06-10 10:45:58.905179] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:20:35.304 10:45:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:35.304 10:45:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:35.304 10:45:59 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LLy1V17JdC 00:20:35.565 [2024-06-10 10:45:59.710222] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:35.565 [2024-06-10 10:45:59.710268] bdev_nvme.c:6116:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:35.565 [2024-06-10 10:45:59.710274] bdev_nvme.c:6225:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.LLy1V17JdC 00:20:35.565 request: 00:20:35.565 { 00:20:35.565 "name": "TLSTEST", 00:20:35.565 "trtype": "tcp", 00:20:35.565 "traddr": "10.0.0.2", 00:20:35.565 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:35.565 "adrfam": "ipv4", 00:20:35.565 "trsvcid": "4420", 00:20:35.565 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:35.565 "psk": "/tmp/tmp.LLy1V17JdC", 00:20:35.565 "method": "bdev_nvme_attach_controller", 00:20:35.565 "req_id": 1 00:20:35.565 } 00:20:35.565 Got JSON-RPC error response 00:20:35.565 response: 00:20:35.565 { 00:20:35.565 "code": -1, 00:20:35.565 "message": "Operation not permitted" 00:20:35.565 } 00:20:35.565 10:45:59 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 871837 00:20:35.565 10:45:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 871837 ']' 00:20:35.565 10:45:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 871837 00:20:35.565 10:45:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:35.565 10:45:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:35.565 10:45:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 871837 00:20:35.565 10:45:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:20:35.565 10:45:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:20:35.565 10:45:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 871837' 00:20:35.565 killing process with pid 871837 00:20:35.565 10:45:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 871837 00:20:35.565 Received shutdown signal, test time was about 10.000000 seconds 00:20:35.565 00:20:35.565 Latency(us) 00:20:35.565 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.565 =================================================================================================================== 00:20:35.565 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:35.565 10:45:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # 
wait 871837 00:20:35.824 10:45:59 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:35.824 10:45:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:20:35.824 10:45:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:35.824 10:45:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:35.824 10:45:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:35.824 10:45:59 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 869208 00:20:35.824 10:45:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 869208 ']' 00:20:35.824 10:45:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 869208 00:20:35.824 10:45:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:35.824 10:45:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:35.824 10:45:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 869208 00:20:35.824 10:45:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:20:35.824 10:45:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:20:35.824 10:45:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 869208' 00:20:35.824 killing process with pid 869208 00:20:35.824 10:45:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 869208 00:20:35.824 [2024-06-10 10:45:59.959147] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:35.825 [2024-06-10 10:45:59.959183] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:35.825 10:45:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 869208 00:20:35.825 10:46:00 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:20:35.825 10:46:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:35.825 10:46:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:35.825 10:46:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.825 10:46:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=872182 00:20:35.825 10:46:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 872182 00:20:35.825 10:46:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:35.825 10:46:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 872182 ']' 00:20:35.825 10:46:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.825 10:46:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:35.825 10:46:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:35.825 10:46:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:35.825 10:46:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.085 [2024-06-10 10:46:00.140898] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
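The failed initiator run has been cleaned up, the original nvmf target (pid 869208) killed, and a fresh target (pid 872182) started; with the key still mode 0666, the same setup is now expected to fail on the target side instead: the nvmf_subsystem_add_host call below reports "Could not retrieve PSK from file" and returns a -32603 Internal error. In sketch form (command copied from the log, rpc.py path abbreviated):

  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LLy1V17JdC
  # fails while /tmp/tmp.LLy1V17JdC is still world-readable (0666)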
00:20:36.085 [2024-06-10 10:46:00.140973] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:36.085 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.085 [2024-06-10 10:46:00.227550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.085 [2024-06-10 10:46:00.282101] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:36.085 [2024-06-10 10:46:00.282135] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:36.085 [2024-06-10 10:46:00.282141] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:36.085 [2024-06-10 10:46:00.282145] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:36.085 [2024-06-10 10:46:00.282150] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:36.085 [2024-06-10 10:46:00.282170] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.652 10:46:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:36.652 10:46:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:36.652 10:46:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:36.652 10:46:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:36.652 10:46:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.652 10:46:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:36.652 10:46:00 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.LLy1V17JdC 00:20:36.652 10:46:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:20:36.652 10:46:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.LLy1V17JdC 00:20:36.652 10:46:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=setup_nvmf_tgt 00:20:36.652 10:46:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:36.652 10:46:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t setup_nvmf_tgt 00:20:36.652 10:46:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:36.652 10:46:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # setup_nvmf_tgt /tmp/tmp.LLy1V17JdC 00:20:36.652 10:46:00 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.LLy1V17JdC 00:20:36.652 10:46:00 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:36.913 [2024-06-10 10:46:01.075915] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.913 10:46:01 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:37.174 10:46:01 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:37.174 [2024-06-10 10:46:01.380654] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:37.174 [2024-06-10 10:46:01.380694] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:37.174 [2024-06-10 10:46:01.380847] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:37.174 10:46:01 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:37.434 malloc0 00:20:37.434 10:46:01 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:37.434 10:46:01 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LLy1V17JdC 00:20:37.696 [2024-06-10 10:46:01.807621] tcp.c:3580:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:37.696 [2024-06-10 10:46:01.807641] tcp.c:3666:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:37.696 [2024-06-10 10:46:01.807660] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:37.696 request: 00:20:37.696 { 00:20:37.696 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.696 "host": "nqn.2016-06.io.spdk:host1", 00:20:37.696 "psk": "/tmp/tmp.LLy1V17JdC", 00:20:37.696 "method": "nvmf_subsystem_add_host", 00:20:37.696 "req_id": 1 00:20:37.696 } 00:20:37.696 Got JSON-RPC error response 00:20:37.696 response: 00:20:37.696 { 00:20:37.696 "code": -32603, 00:20:37.696 "message": "Internal error" 00:20:37.696 } 00:20:37.696 10:46:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:20:37.696 10:46:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:37.696 10:46:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:37.696 10:46:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:37.696 10:46:01 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 872182 00:20:37.696 10:46:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 872182 ']' 00:20:37.696 10:46:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 872182 00:20:37.696 10:46:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:37.696 10:46:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:37.696 10:46:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 872182 00:20:37.696 10:46:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:20:37.696 10:46:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:20:37.696 10:46:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 872182' 00:20:37.696 killing process with pid 872182 00:20:37.696 10:46:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 872182 00:20:37.696 [2024-06-10 10:46:01.876653] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:37.696 10:46:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 872182 00:20:37.958 10:46:01 nvmf_tcp.nvmf_tls -- 
target/tls.sh@181 -- # chmod 0600 /tmp/tmp.LLy1V17JdC 00:20:37.958 10:46:01 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:37.958 10:46:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:37.958 10:46:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:37.958 10:46:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.958 10:46:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=872549 00:20:37.958 10:46:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 872549 00:20:37.958 10:46:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:37.958 10:46:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 872549 ']' 00:20:37.958 10:46:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.958 10:46:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:37.958 10:46:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.959 10:46:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:37.959 10:46:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.959 [2024-06-10 10:46:02.053041] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:20:37.959 [2024-06-10 10:46:02.053091] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:37.959 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.959 [2024-06-10 10:46:02.134701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.959 [2024-06-10 10:46:02.186117] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:37.959 [2024-06-10 10:46:02.186152] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:37.959 [2024-06-10 10:46:02.186157] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:37.959 [2024-06-10 10:46:02.186162] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:37.959 [2024-06-10 10:46:02.186166] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
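The key has now been tightened back to mode 0600, which the PSK permission check accepts, and yet another target (pid 872549) has been started; the setup below therefore completes end to end (nvmf_subsystem_add_host only emits the usual PSK-path deprecation warning) and a further TLS bdevperf run follows. The recovery step, sketched with paths copied from the log and the rpc.py path abbreviated:

  chmod 0600 /tmp/tmp.LLy1V17JdC   # owner-only access satisfies the PSK permission check
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LLy1V17JdC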
00:20:37.959 [2024-06-10 10:46:02.186183] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.529 10:46:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:38.529 10:46:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:38.529 10:46:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:38.529 10:46:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:38.529 10:46:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.789 10:46:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.789 10:46:02 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.LLy1V17JdC 00:20:38.789 10:46:02 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.LLy1V17JdC 00:20:38.789 10:46:02 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:38.789 [2024-06-10 10:46:02.972165] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:38.789 10:46:02 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:39.048 10:46:03 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:39.048 [2024-06-10 10:46:03.280907] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:39.048 [2024-06-10 10:46:03.280947] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:39.048 [2024-06-10 10:46:03.281112] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:39.048 10:46:03 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:39.308 malloc0 00:20:39.308 10:46:03 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:39.570 10:46:03 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LLy1V17JdC 00:20:39.570 [2024-06-10 10:46:03.740045] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:39.570 10:46:03 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=872911 00:20:39.570 10:46:03 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:39.570 10:46:03 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:39.570 10:46:03 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 872911 /var/tmp/bdevperf.sock 00:20:39.570 10:46:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 872911 ']' 00:20:39.570 10:46:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:20:39.570 10:46:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:39.570 10:46:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:39.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:39.570 10:46:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:39.570 10:46:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.570 [2024-06-10 10:46:03.803414] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:20:39.570 [2024-06-10 10:46:03.803462] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid872911 ] 00:20:39.570 EAL: No free 2048 kB hugepages reported on node 1 00:20:39.570 [2024-06-10 10:46:03.852729] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.831 [2024-06-10 10:46:03.904905] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.400 10:46:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:40.400 10:46:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:40.400 10:46:04 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LLy1V17JdC 00:20:40.661 [2024-06-10 10:46:04.701849] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:40.661 [2024-06-10 10:46:04.701908] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:40.661 TLSTESTn1 00:20:40.661 10:46:04 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:40.926 10:46:05 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:20:40.926 "subsystems": [ 00:20:40.926 { 00:20:40.926 "subsystem": "keyring", 00:20:40.926 "config": [] 00:20:40.926 }, 00:20:40.926 { 00:20:40.926 "subsystem": "iobuf", 00:20:40.926 "config": [ 00:20:40.926 { 00:20:40.926 "method": "iobuf_set_options", 00:20:40.926 "params": { 00:20:40.926 "small_pool_count": 8192, 00:20:40.926 "large_pool_count": 1024, 00:20:40.926 "small_bufsize": 8192, 00:20:40.926 "large_bufsize": 135168 00:20:40.926 } 00:20:40.926 } 00:20:40.926 ] 00:20:40.926 }, 00:20:40.926 { 00:20:40.926 "subsystem": "sock", 00:20:40.926 "config": [ 00:20:40.926 { 00:20:40.926 "method": "sock_set_default_impl", 00:20:40.926 "params": { 00:20:40.926 "impl_name": "posix" 00:20:40.926 } 00:20:40.926 }, 00:20:40.926 { 00:20:40.926 "method": "sock_impl_set_options", 00:20:40.926 "params": { 00:20:40.926 "impl_name": "ssl", 00:20:40.926 "recv_buf_size": 4096, 00:20:40.926 "send_buf_size": 4096, 00:20:40.926 "enable_recv_pipe": true, 00:20:40.926 "enable_quickack": false, 00:20:40.926 "enable_placement_id": 0, 00:20:40.926 "enable_zerocopy_send_server": true, 00:20:40.926 "enable_zerocopy_send_client": false, 00:20:40.926 "zerocopy_threshold": 0, 00:20:40.926 "tls_version": 0, 00:20:40.926 "enable_ktls": 
false 00:20:40.926 } 00:20:40.926 }, 00:20:40.926 { 00:20:40.926 "method": "sock_impl_set_options", 00:20:40.926 "params": { 00:20:40.926 "impl_name": "posix", 00:20:40.926 "recv_buf_size": 2097152, 00:20:40.926 "send_buf_size": 2097152, 00:20:40.926 "enable_recv_pipe": true, 00:20:40.926 "enable_quickack": false, 00:20:40.926 "enable_placement_id": 0, 00:20:40.926 "enable_zerocopy_send_server": true, 00:20:40.926 "enable_zerocopy_send_client": false, 00:20:40.926 "zerocopy_threshold": 0, 00:20:40.926 "tls_version": 0, 00:20:40.926 "enable_ktls": false 00:20:40.926 } 00:20:40.926 } 00:20:40.926 ] 00:20:40.926 }, 00:20:40.926 { 00:20:40.926 "subsystem": "vmd", 00:20:40.926 "config": [] 00:20:40.926 }, 00:20:40.926 { 00:20:40.926 "subsystem": "accel", 00:20:40.926 "config": [ 00:20:40.926 { 00:20:40.926 "method": "accel_set_options", 00:20:40.926 "params": { 00:20:40.926 "small_cache_size": 128, 00:20:40.926 "large_cache_size": 16, 00:20:40.926 "task_count": 2048, 00:20:40.926 "sequence_count": 2048, 00:20:40.926 "buf_count": 2048 00:20:40.926 } 00:20:40.926 } 00:20:40.926 ] 00:20:40.926 }, 00:20:40.926 { 00:20:40.926 "subsystem": "bdev", 00:20:40.926 "config": [ 00:20:40.926 { 00:20:40.926 "method": "bdev_set_options", 00:20:40.926 "params": { 00:20:40.926 "bdev_io_pool_size": 65535, 00:20:40.926 "bdev_io_cache_size": 256, 00:20:40.926 "bdev_auto_examine": true, 00:20:40.926 "iobuf_small_cache_size": 128, 00:20:40.926 "iobuf_large_cache_size": 16 00:20:40.926 } 00:20:40.926 }, 00:20:40.926 { 00:20:40.926 "method": "bdev_raid_set_options", 00:20:40.926 "params": { 00:20:40.926 "process_window_size_kb": 1024 00:20:40.926 } 00:20:40.926 }, 00:20:40.926 { 00:20:40.926 "method": "bdev_iscsi_set_options", 00:20:40.927 "params": { 00:20:40.927 "timeout_sec": 30 00:20:40.927 } 00:20:40.927 }, 00:20:40.927 { 00:20:40.927 "method": "bdev_nvme_set_options", 00:20:40.927 "params": { 00:20:40.927 "action_on_timeout": "none", 00:20:40.927 "timeout_us": 0, 00:20:40.927 "timeout_admin_us": 0, 00:20:40.927 "keep_alive_timeout_ms": 10000, 00:20:40.927 "arbitration_burst": 0, 00:20:40.927 "low_priority_weight": 0, 00:20:40.927 "medium_priority_weight": 0, 00:20:40.927 "high_priority_weight": 0, 00:20:40.927 "nvme_adminq_poll_period_us": 10000, 00:20:40.927 "nvme_ioq_poll_period_us": 0, 00:20:40.927 "io_queue_requests": 0, 00:20:40.927 "delay_cmd_submit": true, 00:20:40.927 "transport_retry_count": 4, 00:20:40.927 "bdev_retry_count": 3, 00:20:40.927 "transport_ack_timeout": 0, 00:20:40.927 "ctrlr_loss_timeout_sec": 0, 00:20:40.927 "reconnect_delay_sec": 0, 00:20:40.927 "fast_io_fail_timeout_sec": 0, 00:20:40.927 "disable_auto_failback": false, 00:20:40.927 "generate_uuids": false, 00:20:40.927 "transport_tos": 0, 00:20:40.927 "nvme_error_stat": false, 00:20:40.927 "rdma_srq_size": 0, 00:20:40.927 "io_path_stat": false, 00:20:40.927 "allow_accel_sequence": false, 00:20:40.927 "rdma_max_cq_size": 0, 00:20:40.927 "rdma_cm_event_timeout_ms": 0, 00:20:40.927 "dhchap_digests": [ 00:20:40.927 "sha256", 00:20:40.927 "sha384", 00:20:40.927 "sha512" 00:20:40.927 ], 00:20:40.927 "dhchap_dhgroups": [ 00:20:40.927 "null", 00:20:40.927 "ffdhe2048", 00:20:40.927 "ffdhe3072", 00:20:40.927 "ffdhe4096", 00:20:40.927 "ffdhe6144", 00:20:40.927 "ffdhe8192" 00:20:40.927 ] 00:20:40.927 } 00:20:40.927 }, 00:20:40.927 { 00:20:40.927 "method": "bdev_nvme_set_hotplug", 00:20:40.927 "params": { 00:20:40.927 "period_us": 100000, 00:20:40.927 "enable": false 00:20:40.927 } 00:20:40.927 }, 00:20:40.927 { 00:20:40.927 "method": 
"bdev_malloc_create", 00:20:40.927 "params": { 00:20:40.927 "name": "malloc0", 00:20:40.927 "num_blocks": 8192, 00:20:40.927 "block_size": 4096, 00:20:40.927 "physical_block_size": 4096, 00:20:40.927 "uuid": "e9f5c571-8804-48a2-b276-1036bc04d503", 00:20:40.927 "optimal_io_boundary": 0 00:20:40.927 } 00:20:40.927 }, 00:20:40.927 { 00:20:40.927 "method": "bdev_wait_for_examine" 00:20:40.927 } 00:20:40.927 ] 00:20:40.927 }, 00:20:40.927 { 00:20:40.927 "subsystem": "nbd", 00:20:40.927 "config": [] 00:20:40.927 }, 00:20:40.927 { 00:20:40.927 "subsystem": "scheduler", 00:20:40.927 "config": [ 00:20:40.927 { 00:20:40.927 "method": "framework_set_scheduler", 00:20:40.927 "params": { 00:20:40.927 "name": "static" 00:20:40.927 } 00:20:40.927 } 00:20:40.927 ] 00:20:40.927 }, 00:20:40.927 { 00:20:40.927 "subsystem": "nvmf", 00:20:40.927 "config": [ 00:20:40.927 { 00:20:40.927 "method": "nvmf_set_config", 00:20:40.927 "params": { 00:20:40.927 "discovery_filter": "match_any", 00:20:40.927 "admin_cmd_passthru": { 00:20:40.927 "identify_ctrlr": false 00:20:40.927 } 00:20:40.927 } 00:20:40.927 }, 00:20:40.927 { 00:20:40.927 "method": "nvmf_set_max_subsystems", 00:20:40.927 "params": { 00:20:40.927 "max_subsystems": 1024 00:20:40.927 } 00:20:40.927 }, 00:20:40.927 { 00:20:40.927 "method": "nvmf_set_crdt", 00:20:40.927 "params": { 00:20:40.927 "crdt1": 0, 00:20:40.927 "crdt2": 0, 00:20:40.927 "crdt3": 0 00:20:40.927 } 00:20:40.927 }, 00:20:40.927 { 00:20:40.927 "method": "nvmf_create_transport", 00:20:40.927 "params": { 00:20:40.927 "trtype": "TCP", 00:20:40.927 "max_queue_depth": 128, 00:20:40.927 "max_io_qpairs_per_ctrlr": 127, 00:20:40.927 "in_capsule_data_size": 4096, 00:20:40.927 "max_io_size": 131072, 00:20:40.927 "io_unit_size": 131072, 00:20:40.927 "max_aq_depth": 128, 00:20:40.927 "num_shared_buffers": 511, 00:20:40.927 "buf_cache_size": 4294967295, 00:20:40.927 "dif_insert_or_strip": false, 00:20:40.927 "zcopy": false, 00:20:40.927 "c2h_success": false, 00:20:40.927 "sock_priority": 0, 00:20:40.927 "abort_timeout_sec": 1, 00:20:40.927 "ack_timeout": 0, 00:20:40.927 "data_wr_pool_size": 0 00:20:40.927 } 00:20:40.927 }, 00:20:40.927 { 00:20:40.927 "method": "nvmf_create_subsystem", 00:20:40.927 "params": { 00:20:40.927 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:40.927 "allow_any_host": false, 00:20:40.927 "serial_number": "SPDK00000000000001", 00:20:40.927 "model_number": "SPDK bdev Controller", 00:20:40.927 "max_namespaces": 10, 00:20:40.927 "min_cntlid": 1, 00:20:40.927 "max_cntlid": 65519, 00:20:40.927 "ana_reporting": false 00:20:40.927 } 00:20:40.927 }, 00:20:40.927 { 00:20:40.927 "method": "nvmf_subsystem_add_host", 00:20:40.927 "params": { 00:20:40.927 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:40.927 "host": "nqn.2016-06.io.spdk:host1", 00:20:40.927 "psk": "/tmp/tmp.LLy1V17JdC" 00:20:40.927 } 00:20:40.927 }, 00:20:40.927 { 00:20:40.927 "method": "nvmf_subsystem_add_ns", 00:20:40.927 "params": { 00:20:40.927 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:40.927 "namespace": { 00:20:40.927 "nsid": 1, 00:20:40.927 "bdev_name": "malloc0", 00:20:40.927 "nguid": "E9F5C571880448A2B2761036BC04D503", 00:20:40.927 "uuid": "e9f5c571-8804-48a2-b276-1036bc04d503", 00:20:40.927 "no_auto_visible": false 00:20:40.927 } 00:20:40.927 } 00:20:40.927 }, 00:20:40.927 { 00:20:40.927 "method": "nvmf_subsystem_add_listener", 00:20:40.927 "params": { 00:20:40.927 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:40.927 "listen_address": { 00:20:40.927 "trtype": "TCP", 00:20:40.927 "adrfam": "IPv4", 00:20:40.927 "traddr": 
"10.0.0.2", 00:20:40.927 "trsvcid": "4420" 00:20:40.927 }, 00:20:40.927 "secure_channel": true 00:20:40.927 } 00:20:40.927 } 00:20:40.927 ] 00:20:40.927 } 00:20:40.927 ] 00:20:40.927 }' 00:20:40.927 10:46:05 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:41.186 10:46:05 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:20:41.186 "subsystems": [ 00:20:41.186 { 00:20:41.186 "subsystem": "keyring", 00:20:41.187 "config": [] 00:20:41.187 }, 00:20:41.187 { 00:20:41.187 "subsystem": "iobuf", 00:20:41.187 "config": [ 00:20:41.187 { 00:20:41.187 "method": "iobuf_set_options", 00:20:41.187 "params": { 00:20:41.187 "small_pool_count": 8192, 00:20:41.187 "large_pool_count": 1024, 00:20:41.187 "small_bufsize": 8192, 00:20:41.187 "large_bufsize": 135168 00:20:41.187 } 00:20:41.187 } 00:20:41.187 ] 00:20:41.187 }, 00:20:41.187 { 00:20:41.187 "subsystem": "sock", 00:20:41.187 "config": [ 00:20:41.187 { 00:20:41.187 "method": "sock_set_default_impl", 00:20:41.187 "params": { 00:20:41.187 "impl_name": "posix" 00:20:41.187 } 00:20:41.187 }, 00:20:41.187 { 00:20:41.187 "method": "sock_impl_set_options", 00:20:41.187 "params": { 00:20:41.187 "impl_name": "ssl", 00:20:41.187 "recv_buf_size": 4096, 00:20:41.187 "send_buf_size": 4096, 00:20:41.187 "enable_recv_pipe": true, 00:20:41.187 "enable_quickack": false, 00:20:41.187 "enable_placement_id": 0, 00:20:41.187 "enable_zerocopy_send_server": true, 00:20:41.187 "enable_zerocopy_send_client": false, 00:20:41.187 "zerocopy_threshold": 0, 00:20:41.187 "tls_version": 0, 00:20:41.187 "enable_ktls": false 00:20:41.187 } 00:20:41.187 }, 00:20:41.187 { 00:20:41.187 "method": "sock_impl_set_options", 00:20:41.187 "params": { 00:20:41.187 "impl_name": "posix", 00:20:41.187 "recv_buf_size": 2097152, 00:20:41.187 "send_buf_size": 2097152, 00:20:41.187 "enable_recv_pipe": true, 00:20:41.187 "enable_quickack": false, 00:20:41.187 "enable_placement_id": 0, 00:20:41.187 "enable_zerocopy_send_server": true, 00:20:41.187 "enable_zerocopy_send_client": false, 00:20:41.187 "zerocopy_threshold": 0, 00:20:41.187 "tls_version": 0, 00:20:41.187 "enable_ktls": false 00:20:41.187 } 00:20:41.187 } 00:20:41.187 ] 00:20:41.187 }, 00:20:41.187 { 00:20:41.187 "subsystem": "vmd", 00:20:41.187 "config": [] 00:20:41.187 }, 00:20:41.187 { 00:20:41.187 "subsystem": "accel", 00:20:41.187 "config": [ 00:20:41.187 { 00:20:41.187 "method": "accel_set_options", 00:20:41.187 "params": { 00:20:41.187 "small_cache_size": 128, 00:20:41.187 "large_cache_size": 16, 00:20:41.187 "task_count": 2048, 00:20:41.187 "sequence_count": 2048, 00:20:41.187 "buf_count": 2048 00:20:41.187 } 00:20:41.187 } 00:20:41.187 ] 00:20:41.187 }, 00:20:41.187 { 00:20:41.187 "subsystem": "bdev", 00:20:41.187 "config": [ 00:20:41.187 { 00:20:41.187 "method": "bdev_set_options", 00:20:41.187 "params": { 00:20:41.187 "bdev_io_pool_size": 65535, 00:20:41.187 "bdev_io_cache_size": 256, 00:20:41.187 "bdev_auto_examine": true, 00:20:41.187 "iobuf_small_cache_size": 128, 00:20:41.187 "iobuf_large_cache_size": 16 00:20:41.187 } 00:20:41.187 }, 00:20:41.187 { 00:20:41.187 "method": "bdev_raid_set_options", 00:20:41.187 "params": { 00:20:41.187 "process_window_size_kb": 1024 00:20:41.187 } 00:20:41.187 }, 00:20:41.187 { 00:20:41.187 "method": "bdev_iscsi_set_options", 00:20:41.187 "params": { 00:20:41.187 "timeout_sec": 30 00:20:41.187 } 00:20:41.187 }, 00:20:41.187 { 00:20:41.187 "method": "bdev_nvme_set_options", 
00:20:41.187 "params": { 00:20:41.187 "action_on_timeout": "none", 00:20:41.187 "timeout_us": 0, 00:20:41.187 "timeout_admin_us": 0, 00:20:41.187 "keep_alive_timeout_ms": 10000, 00:20:41.187 "arbitration_burst": 0, 00:20:41.187 "low_priority_weight": 0, 00:20:41.187 "medium_priority_weight": 0, 00:20:41.187 "high_priority_weight": 0, 00:20:41.187 "nvme_adminq_poll_period_us": 10000, 00:20:41.187 "nvme_ioq_poll_period_us": 0, 00:20:41.187 "io_queue_requests": 512, 00:20:41.187 "delay_cmd_submit": true, 00:20:41.187 "transport_retry_count": 4, 00:20:41.187 "bdev_retry_count": 3, 00:20:41.187 "transport_ack_timeout": 0, 00:20:41.187 "ctrlr_loss_timeout_sec": 0, 00:20:41.187 "reconnect_delay_sec": 0, 00:20:41.187 "fast_io_fail_timeout_sec": 0, 00:20:41.187 "disable_auto_failback": false, 00:20:41.187 "generate_uuids": false, 00:20:41.187 "transport_tos": 0, 00:20:41.187 "nvme_error_stat": false, 00:20:41.187 "rdma_srq_size": 0, 00:20:41.187 "io_path_stat": false, 00:20:41.187 "allow_accel_sequence": false, 00:20:41.187 "rdma_max_cq_size": 0, 00:20:41.187 "rdma_cm_event_timeout_ms": 0, 00:20:41.187 "dhchap_digests": [ 00:20:41.187 "sha256", 00:20:41.187 "sha384", 00:20:41.187 "sha512" 00:20:41.187 ], 00:20:41.187 "dhchap_dhgroups": [ 00:20:41.187 "null", 00:20:41.187 "ffdhe2048", 00:20:41.187 "ffdhe3072", 00:20:41.187 "ffdhe4096", 00:20:41.187 "ffdhe6144", 00:20:41.187 "ffdhe8192" 00:20:41.187 ] 00:20:41.187 } 00:20:41.187 }, 00:20:41.187 { 00:20:41.187 "method": "bdev_nvme_attach_controller", 00:20:41.187 "params": { 00:20:41.187 "name": "TLSTEST", 00:20:41.187 "trtype": "TCP", 00:20:41.187 "adrfam": "IPv4", 00:20:41.187 "traddr": "10.0.0.2", 00:20:41.187 "trsvcid": "4420", 00:20:41.187 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.187 "prchk_reftag": false, 00:20:41.187 "prchk_guard": false, 00:20:41.187 "ctrlr_loss_timeout_sec": 0, 00:20:41.187 "reconnect_delay_sec": 0, 00:20:41.187 "fast_io_fail_timeout_sec": 0, 00:20:41.187 "psk": "/tmp/tmp.LLy1V17JdC", 00:20:41.187 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:41.187 "hdgst": false, 00:20:41.187 "ddgst": false 00:20:41.187 } 00:20:41.187 }, 00:20:41.187 { 00:20:41.187 "method": "bdev_nvme_set_hotplug", 00:20:41.187 "params": { 00:20:41.187 "period_us": 100000, 00:20:41.187 "enable": false 00:20:41.187 } 00:20:41.187 }, 00:20:41.187 { 00:20:41.187 "method": "bdev_wait_for_examine" 00:20:41.187 } 00:20:41.187 ] 00:20:41.187 }, 00:20:41.187 { 00:20:41.187 "subsystem": "nbd", 00:20:41.187 "config": [] 00:20:41.187 } 00:20:41.187 ] 00:20:41.187 }' 00:20:41.187 10:46:05 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 872911 00:20:41.187 10:46:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 872911 ']' 00:20:41.187 10:46:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 872911 00:20:41.187 10:46:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:41.187 10:46:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:41.187 10:46:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 872911 00:20:41.187 10:46:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:20:41.187 10:46:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:20:41.187 10:46:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 872911' 00:20:41.187 killing process with pid 872911 00:20:41.187 10:46:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # 
kill 872911 00:20:41.187 Received shutdown signal, test time was about 10.000000 seconds 00:20:41.187 00:20:41.187 Latency(us) 00:20:41.187 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:41.187 =================================================================================================================== 00:20:41.187 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:41.187 [2024-06-10 10:46:05.327882] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:41.187 10:46:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 872911 00:20:41.187 10:46:05 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 872549 00:20:41.187 10:46:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 872549 ']' 00:20:41.187 10:46:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 872549 00:20:41.187 10:46:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:41.187 10:46:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:41.187 10:46:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 872549 00:20:41.448 10:46:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:20:41.448 10:46:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:20:41.448 10:46:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 872549' 00:20:41.448 killing process with pid 872549 00:20:41.448 10:46:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 872549 00:20:41.448 [2024-06-10 10:46:05.493750] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:41.448 [2024-06-10 10:46:05.493782] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:41.448 10:46:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 872549 00:20:41.448 10:46:05 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:41.448 10:46:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:41.448 10:46:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:41.448 10:46:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.448 10:46:05 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:20:41.448 "subsystems": [ 00:20:41.448 { 00:20:41.448 "subsystem": "keyring", 00:20:41.448 "config": [] 00:20:41.448 }, 00:20:41.448 { 00:20:41.448 "subsystem": "iobuf", 00:20:41.448 "config": [ 00:20:41.448 { 00:20:41.448 "method": "iobuf_set_options", 00:20:41.448 "params": { 00:20:41.448 "small_pool_count": 8192, 00:20:41.448 "large_pool_count": 1024, 00:20:41.448 "small_bufsize": 8192, 00:20:41.448 "large_bufsize": 135168 00:20:41.448 } 00:20:41.448 } 00:20:41.448 ] 00:20:41.448 }, 00:20:41.448 { 00:20:41.448 "subsystem": "sock", 00:20:41.448 "config": [ 00:20:41.448 { 00:20:41.448 "method": "sock_set_default_impl", 00:20:41.448 "params": { 00:20:41.448 "impl_name": "posix" 00:20:41.448 } 00:20:41.448 }, 00:20:41.448 { 00:20:41.448 "method": "sock_impl_set_options", 00:20:41.448 "params": { 00:20:41.448 "impl_name": "ssl", 00:20:41.448 "recv_buf_size": 4096, 00:20:41.448 
"send_buf_size": 4096, 00:20:41.448 "enable_recv_pipe": true, 00:20:41.448 "enable_quickack": false, 00:20:41.448 "enable_placement_id": 0, 00:20:41.448 "enable_zerocopy_send_server": true, 00:20:41.448 "enable_zerocopy_send_client": false, 00:20:41.448 "zerocopy_threshold": 0, 00:20:41.448 "tls_version": 0, 00:20:41.448 "enable_ktls": false 00:20:41.448 } 00:20:41.448 }, 00:20:41.448 { 00:20:41.448 "method": "sock_impl_set_options", 00:20:41.448 "params": { 00:20:41.448 "impl_name": "posix", 00:20:41.448 "recv_buf_size": 2097152, 00:20:41.448 "send_buf_size": 2097152, 00:20:41.448 "enable_recv_pipe": true, 00:20:41.448 "enable_quickack": false, 00:20:41.448 "enable_placement_id": 0, 00:20:41.448 "enable_zerocopy_send_server": true, 00:20:41.448 "enable_zerocopy_send_client": false, 00:20:41.448 "zerocopy_threshold": 0, 00:20:41.448 "tls_version": 0, 00:20:41.448 "enable_ktls": false 00:20:41.448 } 00:20:41.448 } 00:20:41.448 ] 00:20:41.448 }, 00:20:41.448 { 00:20:41.448 "subsystem": "vmd", 00:20:41.448 "config": [] 00:20:41.448 }, 00:20:41.448 { 00:20:41.448 "subsystem": "accel", 00:20:41.448 "config": [ 00:20:41.448 { 00:20:41.448 "method": "accel_set_options", 00:20:41.448 "params": { 00:20:41.448 "small_cache_size": 128, 00:20:41.448 "large_cache_size": 16, 00:20:41.448 "task_count": 2048, 00:20:41.448 "sequence_count": 2048, 00:20:41.448 "buf_count": 2048 00:20:41.448 } 00:20:41.448 } 00:20:41.448 ] 00:20:41.448 }, 00:20:41.448 { 00:20:41.448 "subsystem": "bdev", 00:20:41.448 "config": [ 00:20:41.448 { 00:20:41.448 "method": "bdev_set_options", 00:20:41.448 "params": { 00:20:41.448 "bdev_io_pool_size": 65535, 00:20:41.448 "bdev_io_cache_size": 256, 00:20:41.448 "bdev_auto_examine": true, 00:20:41.448 "iobuf_small_cache_size": 128, 00:20:41.448 "iobuf_large_cache_size": 16 00:20:41.448 } 00:20:41.448 }, 00:20:41.448 { 00:20:41.448 "method": "bdev_raid_set_options", 00:20:41.448 "params": { 00:20:41.448 "process_window_size_kb": 1024 00:20:41.448 } 00:20:41.448 }, 00:20:41.448 { 00:20:41.448 "method": "bdev_iscsi_set_options", 00:20:41.448 "params": { 00:20:41.448 "timeout_sec": 30 00:20:41.448 } 00:20:41.448 }, 00:20:41.448 { 00:20:41.448 "method": "bdev_nvme_set_options", 00:20:41.448 "params": { 00:20:41.448 "action_on_timeout": "none", 00:20:41.448 "timeout_us": 0, 00:20:41.448 "timeout_admin_us": 0, 00:20:41.448 "keep_alive_timeout_ms": 10000, 00:20:41.448 "arbitration_burst": 0, 00:20:41.448 "low_priority_weight": 0, 00:20:41.448 "medium_priority_weight": 0, 00:20:41.448 "high_priority_weight": 0, 00:20:41.448 "nvme_adminq_poll_period_us": 10000, 00:20:41.448 "nvme_ioq_poll_period_us": 0, 00:20:41.448 "io_queue_requests": 0, 00:20:41.448 "delay_cmd_submit": true, 00:20:41.448 "transport_retry_count": 4, 00:20:41.448 "bdev_retry_count": 3, 00:20:41.448 "transport_ack_timeout": 0, 00:20:41.448 "ctrlr_loss_timeout_sec": 0, 00:20:41.448 "reconnect_delay_sec": 0, 00:20:41.448 "fast_io_fail_timeout_sec": 0, 00:20:41.448 "disable_auto_failback": false, 00:20:41.448 "generate_uuids": false, 00:20:41.448 "transport_tos": 0, 00:20:41.448 "nvme_error_stat": false, 00:20:41.448 "rdma_srq_size": 0, 00:20:41.448 "io_path_stat": false, 00:20:41.448 "allow_accel_sequence": false, 00:20:41.448 "rdma_max_cq_size": 0, 00:20:41.448 "rdma_cm_event_timeout_ms": 0, 00:20:41.448 "dhchap_digests": [ 00:20:41.448 "sha256", 00:20:41.448 "sha384", 00:20:41.448 "sha512" 00:20:41.448 ], 00:20:41.448 "dhchap_dhgroups": [ 00:20:41.449 "null", 00:20:41.449 "ffdhe2048", 00:20:41.449 "ffdhe3072", 00:20:41.449 
"ffdhe4096", 00:20:41.449 "ffdhe6144", 00:20:41.449 "ffdhe8192" 00:20:41.449 ] 00:20:41.449 } 00:20:41.449 }, 00:20:41.449 { 00:20:41.449 "method": "bdev_nvme_set_hotplug", 00:20:41.449 "params": { 00:20:41.449 "period_us": 100000, 00:20:41.449 "enable": false 00:20:41.449 } 00:20:41.449 }, 00:20:41.449 { 00:20:41.449 "method": "bdev_malloc_create", 00:20:41.449 "params": { 00:20:41.449 "name": "malloc0", 00:20:41.449 "num_blocks": 8192, 00:20:41.449 "block_size": 4096, 00:20:41.449 "physical_block_size": 4096, 00:20:41.449 "uuid": "e9f5c571-8804-48a2-b276-1036bc04d503", 00:20:41.449 "optimal_io_boundary": 0 00:20:41.449 } 00:20:41.449 }, 00:20:41.449 { 00:20:41.449 "method": "bdev_wait_for_examine" 00:20:41.449 } 00:20:41.449 ] 00:20:41.449 }, 00:20:41.449 { 00:20:41.449 "subsystem": "nbd", 00:20:41.449 "config": [] 00:20:41.449 }, 00:20:41.449 { 00:20:41.449 "subsystem": "scheduler", 00:20:41.449 "config": [ 00:20:41.449 { 00:20:41.449 "method": "framework_set_scheduler", 00:20:41.449 "params": { 00:20:41.449 "name": "static" 00:20:41.449 } 00:20:41.449 } 00:20:41.449 ] 00:20:41.449 }, 00:20:41.449 { 00:20:41.449 "subsystem": "nvmf", 00:20:41.449 "config": [ 00:20:41.449 { 00:20:41.449 "method": "nvmf_set_config", 00:20:41.449 "params": { 00:20:41.449 "discovery_filter": "match_any", 00:20:41.449 "admin_cmd_passthru": { 00:20:41.449 "identify_ctrlr": false 00:20:41.449 } 00:20:41.449 } 00:20:41.449 }, 00:20:41.449 { 00:20:41.449 "method": "nvmf_set_max_subsystems", 00:20:41.449 "params": { 00:20:41.449 "max_subsystems": 1024 00:20:41.449 } 00:20:41.449 }, 00:20:41.449 { 00:20:41.449 "method": "nvmf_set_crdt", 00:20:41.449 "params": { 00:20:41.449 "crdt1": 0, 00:20:41.449 "crdt2": 0, 00:20:41.449 "crdt3": 0 00:20:41.449 } 00:20:41.449 }, 00:20:41.449 { 00:20:41.449 "method": "nvmf_create_transport", 00:20:41.449 "params": { 00:20:41.449 "trtype": "TCP", 00:20:41.449 "max_queue_depth": 128, 00:20:41.449 "max_io_qpairs_per_ctrlr": 127, 00:20:41.449 "in_capsule_data_size": 4096, 00:20:41.449 "max_io_size": 131072, 00:20:41.449 "io_unit_size": 131072, 00:20:41.449 "max_aq_depth": 128, 00:20:41.449 "num_shared_buffers": 511, 00:20:41.449 "buf_cache_size": 4294967295, 00:20:41.449 "dif_insert_or_strip": false, 00:20:41.449 "zcopy": false, 00:20:41.449 "c2h_success": false, 00:20:41.449 "sock_priority": 0, 00:20:41.449 "abort_timeout_sec": 1, 00:20:41.449 "ack_timeout": 0, 00:20:41.449 "data_wr_pool_size": 0 00:20:41.449 } 00:20:41.449 }, 00:20:41.449 { 00:20:41.449 "method": "nvmf_create_subsystem", 00:20:41.449 "params": { 00:20:41.449 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.449 "allow_any_host": false, 00:20:41.449 "serial_number": "SPDK00000000000001", 00:20:41.449 "model_number": "SPDK bdev Controller", 00:20:41.449 "max_namespaces": 10, 00:20:41.449 "min_cntlid": 1, 00:20:41.449 "max_cntlid": 65519, 00:20:41.449 "ana_reporting": false 00:20:41.449 } 00:20:41.449 }, 00:20:41.449 { 00:20:41.449 "method": "nvmf_subsystem_add_host", 00:20:41.449 "params": { 00:20:41.449 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.449 "host": "nqn.2016-06.io.spdk:host1", 00:20:41.449 "psk": "/tmp/tmp.LLy1V17JdC" 00:20:41.449 } 00:20:41.449 }, 00:20:41.449 { 00:20:41.449 "method": "nvmf_subsystem_add_ns", 00:20:41.449 "params": { 00:20:41.449 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.449 "namespace": { 00:20:41.449 "nsid": 1, 00:20:41.449 "bdev_name": "malloc0", 00:20:41.449 "nguid": "E9F5C571880448A2B2761036BC04D503", 00:20:41.449 "uuid": "e9f5c571-8804-48a2-b276-1036bc04d503", 00:20:41.449 
"no_auto_visible": false 00:20:41.449 } 00:20:41.449 } 00:20:41.449 }, 00:20:41.449 { 00:20:41.449 "method": "nvmf_subsystem_add_listener", 00:20:41.449 "params": { 00:20:41.449 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.449 "listen_address": { 00:20:41.449 "trtype": "TCP", 00:20:41.449 "adrfam": "IPv4", 00:20:41.449 "traddr": "10.0.0.2", 00:20:41.449 "trsvcid": "4420" 00:20:41.449 }, 00:20:41.449 "secure_channel": true 00:20:41.449 } 00:20:41.449 } 00:20:41.449 ] 00:20:41.449 } 00:20:41.449 ] 00:20:41.449 }' 00:20:41.449 10:46:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=873270 00:20:41.449 10:46:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 873270 00:20:41.449 10:46:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:41.449 10:46:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 873270 ']' 00:20:41.449 10:46:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.449 10:46:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:41.449 10:46:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:41.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:41.449 10:46:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:41.449 10:46:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.449 [2024-06-10 10:46:05.671555] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:20:41.449 [2024-06-10 10:46:05.671606] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:41.449 EAL: No free 2048 kB hugepages reported on node 1 00:20:41.709 [2024-06-10 10:46:05.753109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.709 [2024-06-10 10:46:05.805806] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:41.709 [2024-06-10 10:46:05.805838] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:41.709 [2024-06-10 10:46:05.805843] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:41.709 [2024-06-10 10:46:05.805848] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:41.709 [2024-06-10 10:46:05.805851] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:41.710 [2024-06-10 10:46:05.805900] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:20:41.710 [2024-06-10 10:46:05.989950] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:41.970 [2024-06-10 10:46:06.005922] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:41.970 [2024-06-10 10:46:06.021952] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:41.970 [2024-06-10 10:46:06.021984] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:41.970 [2024-06-10 10:46:06.037548] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:42.231 10:46:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:42.231 10:46:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:42.231 10:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:42.231 10:46:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:42.231 10:46:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.231 10:46:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:42.231 10:46:06 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=873426 00:20:42.231 10:46:06 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 873426 /var/tmp/bdevperf.sock 00:20:42.231 10:46:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 873426 ']' 00:20:42.231 10:46:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:42.231 10:46:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:42.231 10:46:06 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:42.231 10:46:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:42.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
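A note for readers following the trace: neither application above is given a config file on disk. nvmf_tgt reads its JSON configuration from /dev/fd/62 and bdevperf (launched on the tls.sh@204 line) reads /dev/fd/63, file descriptors the test script fills from shell variables — the target's config is the JSON dumped above, bdevperf's is echoed just below. A minimal hedged sketch of the same pattern, assuming an SPDK checkout as the working directory and a tiny illustrative config rather than the full one used here:

  # Hedged sketch: hand an inline JSON config to an SPDK app through a
  # /dev/fd/NN path produced by bash process substitution.
  cfg='{ "subsystems": [ { "subsystem": "scheduler", "config": [
          { "method": "framework_set_scheduler", "params": { "name": "static" } } ] } ] }'
  ./build/bin/nvmf_tgt -m 0x2 -c <(echo "$cfg") &
  tgt_pid=$!
  # wait until the RPC socket (/var/tmp/spdk.sock by default) answers
  until ./scripts/rpc.py rpc_get_methods > /dev/null 2>&1; do sleep 0.5; done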
00:20:42.231 10:46:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:42.231 10:46:06 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:20:42.231 "subsystems": [ 00:20:42.231 { 00:20:42.231 "subsystem": "keyring", 00:20:42.231 "config": [] 00:20:42.231 }, 00:20:42.231 { 00:20:42.231 "subsystem": "iobuf", 00:20:42.231 "config": [ 00:20:42.231 { 00:20:42.231 "method": "iobuf_set_options", 00:20:42.231 "params": { 00:20:42.231 "small_pool_count": 8192, 00:20:42.231 "large_pool_count": 1024, 00:20:42.231 "small_bufsize": 8192, 00:20:42.231 "large_bufsize": 135168 00:20:42.231 } 00:20:42.231 } 00:20:42.231 ] 00:20:42.231 }, 00:20:42.231 { 00:20:42.231 "subsystem": "sock", 00:20:42.231 "config": [ 00:20:42.231 { 00:20:42.231 "method": "sock_set_default_impl", 00:20:42.231 "params": { 00:20:42.231 "impl_name": "posix" 00:20:42.231 } 00:20:42.231 }, 00:20:42.231 { 00:20:42.231 "method": "sock_impl_set_options", 00:20:42.231 "params": { 00:20:42.231 "impl_name": "ssl", 00:20:42.231 "recv_buf_size": 4096, 00:20:42.231 "send_buf_size": 4096, 00:20:42.231 "enable_recv_pipe": true, 00:20:42.231 "enable_quickack": false, 00:20:42.231 "enable_placement_id": 0, 00:20:42.231 "enable_zerocopy_send_server": true, 00:20:42.231 "enable_zerocopy_send_client": false, 00:20:42.231 "zerocopy_threshold": 0, 00:20:42.231 "tls_version": 0, 00:20:42.231 "enable_ktls": false 00:20:42.231 } 00:20:42.231 }, 00:20:42.231 { 00:20:42.231 "method": "sock_impl_set_options", 00:20:42.231 "params": { 00:20:42.231 "impl_name": "posix", 00:20:42.231 "recv_buf_size": 2097152, 00:20:42.231 "send_buf_size": 2097152, 00:20:42.231 "enable_recv_pipe": true, 00:20:42.231 "enable_quickack": false, 00:20:42.231 "enable_placement_id": 0, 00:20:42.231 "enable_zerocopy_send_server": true, 00:20:42.231 "enable_zerocopy_send_client": false, 00:20:42.231 "zerocopy_threshold": 0, 00:20:42.231 "tls_version": 0, 00:20:42.231 "enable_ktls": false 00:20:42.231 } 00:20:42.231 } 00:20:42.231 ] 00:20:42.231 }, 00:20:42.231 { 00:20:42.231 "subsystem": "vmd", 00:20:42.231 "config": [] 00:20:42.231 }, 00:20:42.231 { 00:20:42.231 "subsystem": "accel", 00:20:42.231 "config": [ 00:20:42.231 { 00:20:42.231 "method": "accel_set_options", 00:20:42.231 "params": { 00:20:42.231 "small_cache_size": 128, 00:20:42.231 "large_cache_size": 16, 00:20:42.231 "task_count": 2048, 00:20:42.231 "sequence_count": 2048, 00:20:42.231 "buf_count": 2048 00:20:42.231 } 00:20:42.231 } 00:20:42.231 ] 00:20:42.231 }, 00:20:42.231 { 00:20:42.231 "subsystem": "bdev", 00:20:42.231 "config": [ 00:20:42.231 { 00:20:42.231 "method": "bdev_set_options", 00:20:42.232 "params": { 00:20:42.232 "bdev_io_pool_size": 65535, 00:20:42.232 "bdev_io_cache_size": 256, 00:20:42.232 "bdev_auto_examine": true, 00:20:42.232 "iobuf_small_cache_size": 128, 00:20:42.232 "iobuf_large_cache_size": 16 00:20:42.232 } 00:20:42.232 }, 00:20:42.232 { 00:20:42.232 "method": "bdev_raid_set_options", 00:20:42.232 "params": { 00:20:42.232 "process_window_size_kb": 1024 00:20:42.232 } 00:20:42.232 }, 00:20:42.232 { 00:20:42.232 "method": "bdev_iscsi_set_options", 00:20:42.232 "params": { 00:20:42.232 "timeout_sec": 30 00:20:42.232 } 00:20:42.232 }, 00:20:42.232 { 00:20:42.232 "method": "bdev_nvme_set_options", 00:20:42.232 "params": { 00:20:42.232 "action_on_timeout": "none", 00:20:42.232 "timeout_us": 0, 00:20:42.232 "timeout_admin_us": 0, 00:20:42.232 "keep_alive_timeout_ms": 10000, 00:20:42.232 "arbitration_burst": 0, 00:20:42.232 "low_priority_weight": 0, 00:20:42.232 
"medium_priority_weight": 0, 00:20:42.232 "high_priority_weight": 0, 00:20:42.232 "nvme_adminq_poll_period_us": 10000, 00:20:42.232 "nvme_ioq_poll_period_us": 0, 00:20:42.232 "io_queue_requests": 512, 00:20:42.232 "delay_cmd_submit": true, 00:20:42.232 "transport_retry_count": 4, 00:20:42.232 "bdev_retry_count": 3, 00:20:42.232 "transport_ack_timeout": 0, 00:20:42.232 "ctrlr_loss_timeout_sec": 0, 00:20:42.232 "reconnect_delay_sec": 0, 00:20:42.232 "fast_io_fail_timeout_sec": 0, 00:20:42.232 "disable_auto_failback": false, 00:20:42.232 "generate_uuids": false, 00:20:42.232 "transport_tos": 0, 00:20:42.232 "nvme_error_stat": false, 00:20:42.232 "rdma_srq_size": 0, 00:20:42.232 "io_path_stat": false, 00:20:42.232 "allow_accel_sequence": false, 00:20:42.232 "rdma_max_cq_size": 0, 00:20:42.232 "rdma_cm_event_timeout_ms": 0, 00:20:42.232 "dhchap_digests": [ 00:20:42.232 "sha256", 00:20:42.232 "sha384", 00:20:42.232 "sha512" 00:20:42.232 ], 00:20:42.232 "dhchap_dhgroups": [ 00:20:42.232 "null", 00:20:42.232 "ffdhe2048", 00:20:42.232 "ffdhe3072", 00:20:42.232 "ffdhe4096", 00:20:42.232 "ffdhe6144", 00:20:42.232 "ffdhe8192" 00:20:42.232 ] 00:20:42.232 } 00:20:42.232 }, 00:20:42.232 { 00:20:42.232 "method": "bdev_nvme_attach_controller", 00:20:42.232 "params": { 00:20:42.232 "name": "TLSTEST", 00:20:42.232 "trtype": "TCP", 00:20:42.232 "adrfam": "IPv4", 00:20:42.232 "traddr": "10.0.0.2", 00:20:42.232 "trsvcid": "4420", 00:20:42.232 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:42.232 "prchk_reftag": false, 00:20:42.232 "prchk_guard": false, 00:20:42.232 "ctrlr_loss_timeout_sec": 0, 00:20:42.232 "reconnect_delay_sec": 0, 00:20:42.232 "fast_io_fail_timeout_sec": 0, 00:20:42.232 "psk": "/tmp/tmp.LLy1V17JdC", 00:20:42.232 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:42.232 "hdgst": false, 00:20:42.232 "ddgst": false 00:20:42.232 } 00:20:42.232 }, 00:20:42.232 { 00:20:42.232 "method": "bdev_nvme_set_hotplug", 00:20:42.232 "params": { 00:20:42.232 "period_us": 100000, 00:20:42.232 "enable": false 00:20:42.232 } 00:20:42.232 }, 00:20:42.232 { 00:20:42.232 "method": "bdev_wait_for_examine" 00:20:42.232 } 00:20:42.232 ] 00:20:42.232 }, 00:20:42.232 { 00:20:42.232 "subsystem": "nbd", 00:20:42.232 "config": [] 00:20:42.232 } 00:20:42.232 ] 00:20:42.232 }' 00:20:42.232 10:46:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.232 [2024-06-10 10:46:06.517314] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
00:20:42.232 [2024-06-10 10:46:06.517363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid873426 ] 00:20:42.493 EAL: No free 2048 kB hugepages reported on node 1 00:20:42.493 [2024-06-10 10:46:06.567520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.493 [2024-06-10 10:46:06.619646] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:20:42.493 [2024-06-10 10:46:06.744291] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:42.494 [2024-06-10 10:46:06.744353] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:43.064 10:46:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:43.064 10:46:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:43.064 10:46:07 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:43.065 Running I/O for 10 seconds... 00:20:55.299 00:20:55.300 Latency(us) 00:20:55.300 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.300 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:55.300 Verification LBA range: start 0x0 length 0x2000 00:20:55.300 TLSTESTn1 : 10.02 4915.39 19.20 0.00 0.00 26001.63 4642.13 81701.55 00:20:55.300 =================================================================================================================== 00:20:55.300 Total : 4915.39 19.20 0.00 0.00 26001.63 4642.13 81701.55 00:20:55.300 0 00:20:55.300 10:46:17 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:55.300 10:46:17 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 873426 00:20:55.300 10:46:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 873426 ']' 00:20:55.300 10:46:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 873426 00:20:55.300 10:46:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:55.300 10:46:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:55.300 10:46:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 873426 00:20:55.300 10:46:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:20:55.300 10:46:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:20:55.300 10:46:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 873426' 00:20:55.300 killing process with pid 873426 00:20:55.300 10:46:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 873426 00:20:55.300 Received shutdown signal, test time was about 10.000000 seconds 00:20:55.300 00:20:55.300 Latency(us) 00:20:55.300 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.300 =================================================================================================================== 00:20:55.300 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:55.300 [2024-06-10 10:46:17.464193] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in 
v24.09 hit 1 times 00:20:55.300 10:46:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 873426 00:20:55.300 10:46:17 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 873270 00:20:55.300 10:46:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 873270 ']' 00:20:55.300 10:46:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 873270 00:20:55.300 10:46:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:55.300 10:46:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:55.300 10:46:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 873270 00:20:55.300 10:46:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:20:55.300 10:46:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:20:55.300 10:46:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 873270' 00:20:55.300 killing process with pid 873270 00:20:55.300 10:46:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 873270 00:20:55.300 [2024-06-10 10:46:17.630843] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:55.300 [2024-06-10 10:46:17.630875] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:55.300 10:46:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 873270 00:20:55.300 10:46:17 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:20:55.300 10:46:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:55.300 10:46:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:55.300 10:46:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:55.300 10:46:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=875637 00:20:55.300 10:46:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 875637 00:20:55.300 10:46:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:55.300 10:46:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 875637 ']' 00:20:55.300 10:46:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.300 10:46:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:55.300 10:46:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.300 10:46:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:55.300 10:46:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:55.300 [2024-06-10 10:46:17.816789] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
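The deprecation hits logged above include the initiator-side 'spdk_nvme_ctrlr_opts.psk' and the target-side 'PSK path', both of which come from handing the PSK around as a raw file path (/tmp/tmp.LLy1V17JdC). The replacement keyring flow, which a later case in this trace exercises, condenses to the following hedged sketch (socket path and key name as used there; rpc.py path relative to an SPDK checkout; bdevperf assumed to be up and listening):

  # Hedged recap of the keyring-based TLS attach used later in this run.
  rpc='./scripts/rpc.py -s /var/tmp/bdevperf.sock'
  $rpc keyring_file_add_key key0 /tmp/tmp.LLy1V17JdC
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests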
00:20:55.300 [2024-06-10 10:46:17.816861] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:55.300 EAL: No free 2048 kB hugepages reported on node 1 00:20:55.300 [2024-06-10 10:46:17.883778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.300 [2024-06-10 10:46:17.948077] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:55.300 [2024-06-10 10:46:17.948115] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:55.300 [2024-06-10 10:46:17.948124] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:55.300 [2024-06-10 10:46:17.948130] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:55.300 [2024-06-10 10:46:17.948135] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:55.300 [2024-06-10 10:46:17.948155] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:20:55.300 10:46:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:55.300 10:46:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:55.300 10:46:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:55.300 10:46:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:55.300 10:46:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:55.300 10:46:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:55.300 10:46:18 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.LLy1V17JdC 00:20:55.300 10:46:18 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.LLy1V17JdC 00:20:55.300 10:46:18 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:55.300 [2024-06-10 10:46:18.759079] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:55.300 10:46:18 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:55.300 10:46:18 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:55.300 [2024-06-10 10:46:19.099920] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:55.300 [2024-06-10 10:46:19.099973] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:55.300 [2024-06-10 10:46:19.100157] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:55.300 10:46:19 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:55.300 malloc0 00:20:55.300 10:46:19 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
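Collected in one place, the setup_nvmf_tgt sequence traced above (TCP transport, subsystem, TLS-enabled listener via -k, malloc bdev, namespace) plus the PSK binding that follows on the very next trace line amounts to the following hedged recap, assuming an SPDK checkout as the working directory and the target already answering on the default /var/tmp/spdk.sock:

  # Hedged recap of the target-side TLS setup driven through rpc.py here;
  # the nvmf_subsystem_add_host step appears just below in the trace.
  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LLy1V17JdC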
00:20:55.300 10:46:19 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LLy1V17JdC 00:20:55.561 [2024-06-10 10:46:19.591843] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:55.561 10:46:19 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:55.561 10:46:19 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=875996 00:20:55.561 10:46:19 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:55.561 10:46:19 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 875996 /var/tmp/bdevperf.sock 00:20:55.561 10:46:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 875996 ']' 00:20:55.561 10:46:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:55.561 10:46:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:55.561 10:46:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:55.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:55.561 10:46:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:55.561 10:46:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:55.561 [2024-06-10 10:46:19.661455] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:20:55.561 [2024-06-10 10:46:19.661505] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid875996 ] 00:20:55.561 EAL: No free 2048 kB hugepages reported on node 1 00:20:55.561 [2024-06-10 10:46:19.738681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.561 [2024-06-10 10:46:19.792257] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.504 10:46:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:56.504 10:46:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:56.504 10:46:20 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LLy1V17JdC 00:20:56.504 10:46:20 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:56.504 [2024-06-10 10:46:20.718403] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:56.765 nvme0n1 00:20:56.765 10:46:20 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:56.765 Running I/O for 1 seconds... 
00:20:57.708 00:20:57.708 Latency(us) 00:20:57.708 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.708 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:57.708 Verification LBA range: start 0x0 length 0x2000 00:20:57.708 nvme0n1 : 1.02 5012.08 19.58 0.00 0.00 25356.53 6990.51 33423.36 00:20:57.708 =================================================================================================================== 00:20:57.708 Total : 5012.08 19.58 0.00 0.00 25356.53 6990.51 33423.36 00:20:57.708 0 00:20:57.708 10:46:21 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 875996 00:20:57.708 10:46:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 875996 ']' 00:20:57.708 10:46:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 875996 00:20:57.708 10:46:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:57.708 10:46:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:57.708 10:46:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 875996 00:20:57.708 10:46:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:20:57.708 10:46:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:20:57.708 10:46:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 875996' 00:20:57.708 killing process with pid 875996 00:20:57.708 10:46:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 875996 00:20:57.708 Received shutdown signal, test time was about 1.000000 seconds 00:20:57.708 00:20:57.708 Latency(us) 00:20:57.708 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.708 =================================================================================================================== 00:20:57.708 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:57.708 10:46:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 875996 00:20:57.969 10:46:22 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 875637 00:20:57.969 10:46:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 875637 ']' 00:20:57.969 10:46:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 875637 00:20:57.969 10:46:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:20:57.969 10:46:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:57.969 10:46:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 875637 00:20:57.969 10:46:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:57.969 10:46:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:57.970 10:46:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 875637' 00:20:57.970 killing process with pid 875637 00:20:57.970 10:46:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 875637 00:20:57.970 [2024-06-10 10:46:22.152154] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:57.970 [2024-06-10 10:46:22.152198] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:57.970 10:46:22 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@973 -- # wait 875637 00:20:58.233 10:46:22 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:20:58.233 10:46:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:58.233 10:46:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:58.233 10:46:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.233 10:46:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=876602 00:20:58.233 10:46:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 876602 00:20:58.233 10:46:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:58.233 10:46:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 876602 ']' 00:20:58.233 10:46:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.233 10:46:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:58.233 10:46:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:58.233 10:46:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:58.233 10:46:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.233 [2024-06-10 10:46:22.350925] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:20:58.233 [2024-06-10 10:46:22.350979] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:58.233 EAL: No free 2048 kB hugepages reported on node 1 00:20:58.233 [2024-06-10 10:46:22.416623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.233 [2024-06-10 10:46:22.480479] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:58.233 [2024-06-10 10:46:22.480519] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:58.233 [2024-06-10 10:46:22.480526] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:58.233 [2024-06-10 10:46:22.480532] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:58.233 [2024-06-10 10:46:22.480538] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
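Before the next target instance comes up, one aside: the killprocess/waitforlisten helpers exercised in the teardown above are ordinary bash. A simplified, hedged rendering of the kill side, keeping the safety checks visible in the trace (non-empty pid, process still alive, never kill a bare sudo wrapper, then kill and reap):

  # Simplified sketch of the killprocess pattern from autotest_common.sh
  # as it appears in this trace; the real helper in the repo is more
  # thorough than this sketch.
  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" 2>/dev/null || return 1
      if [ "$(uname)" = Linux ] && [ "$(ps --no-headers -o comm= "$pid")" = sudo ]; then
          return 1
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true
  }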
00:20:58.233 [2024-06-10 10:46:22.480556] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.173 10:46:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:59.173 10:46:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:59.173 10:46:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:59.173 10:46:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:59.173 10:46:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.173 10:46:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:59.173 10:46:23 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:20:59.173 10:46:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:59.173 10:46:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.173 [2024-06-10 10:46:23.167364] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:59.173 malloc0 00:20:59.173 [2024-06-10 10:46:23.194125] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:59.173 [2024-06-10 10:46:23.194176] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:59.173 [2024-06-10 10:46:23.194362] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:59.173 10:46:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:59.173 10:46:23 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=876705 00:20:59.173 10:46:23 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 876705 /var/tmp/bdevperf.sock 00:20:59.173 10:46:23 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:59.173 10:46:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 876705 ']' 00:20:59.173 10:46:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:59.173 10:46:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:59.173 10:46:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:59.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:59.173 10:46:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:59.173 10:46:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.173 [2024-06-10 10:46:23.271198] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
00:20:59.173 [2024-06-10 10:46:23.271249] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid876705 ] 00:20:59.173 EAL: No free 2048 kB hugepages reported on node 1 00:20:59.173 [2024-06-10 10:46:23.346355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.173 [2024-06-10 10:46:23.400647] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.744 10:46:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:59.744 10:46:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:20:59.744 10:46:24 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LLy1V17JdC 00:21:00.004 10:46:24 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:00.267 [2024-06-10 10:46:24.323160] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:00.267 nvme0n1 00:21:00.267 10:46:24 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:00.267 Running I/O for 1 seconds... 00:21:01.305 00:21:01.305 Latency(us) 00:21:01.305 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.305 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:01.305 Verification LBA range: start 0x0 length 0x2000 00:21:01.305 nvme0n1 : 1.02 2576.02 10.06 0.00 0.00 49238.42 5215.57 109663.57 00:21:01.305 =================================================================================================================== 00:21:01.305 Total : 2576.02 10.06 0.00 0.00 49238.42 5215.57 109663.57 00:21:01.305 0 00:21:01.305 10:46:25 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:21:01.305 10:46:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:01.305 10:46:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.566 10:46:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:01.566 10:46:25 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:21:01.566 "subsystems": [ 00:21:01.566 { 00:21:01.566 "subsystem": "keyring", 00:21:01.566 "config": [ 00:21:01.566 { 00:21:01.566 "method": "keyring_file_add_key", 00:21:01.566 "params": { 00:21:01.566 "name": "key0", 00:21:01.566 "path": "/tmp/tmp.LLy1V17JdC" 00:21:01.566 } 00:21:01.566 } 00:21:01.566 ] 00:21:01.566 }, 00:21:01.566 { 00:21:01.566 "subsystem": "iobuf", 00:21:01.566 "config": [ 00:21:01.566 { 00:21:01.566 "method": "iobuf_set_options", 00:21:01.566 "params": { 00:21:01.566 "small_pool_count": 8192, 00:21:01.566 "large_pool_count": 1024, 00:21:01.566 "small_bufsize": 8192, 00:21:01.566 "large_bufsize": 135168 00:21:01.566 } 00:21:01.566 } 00:21:01.566 ] 00:21:01.566 }, 00:21:01.566 { 00:21:01.566 "subsystem": "sock", 00:21:01.566 "config": [ 00:21:01.566 { 00:21:01.566 "method": "sock_set_default_impl", 00:21:01.566 "params": { 00:21:01.566 "impl_name": "posix" 00:21:01.566 } 00:21:01.566 }, 
00:21:01.566 { 00:21:01.566 "method": "sock_impl_set_options", 00:21:01.566 "params": { 00:21:01.566 "impl_name": "ssl", 00:21:01.566 "recv_buf_size": 4096, 00:21:01.566 "send_buf_size": 4096, 00:21:01.566 "enable_recv_pipe": true, 00:21:01.566 "enable_quickack": false, 00:21:01.566 "enable_placement_id": 0, 00:21:01.566 "enable_zerocopy_send_server": true, 00:21:01.566 "enable_zerocopy_send_client": false, 00:21:01.566 "zerocopy_threshold": 0, 00:21:01.566 "tls_version": 0, 00:21:01.566 "enable_ktls": false 00:21:01.566 } 00:21:01.566 }, 00:21:01.566 { 00:21:01.566 "method": "sock_impl_set_options", 00:21:01.566 "params": { 00:21:01.566 "impl_name": "posix", 00:21:01.566 "recv_buf_size": 2097152, 00:21:01.566 "send_buf_size": 2097152, 00:21:01.566 "enable_recv_pipe": true, 00:21:01.566 "enable_quickack": false, 00:21:01.566 "enable_placement_id": 0, 00:21:01.566 "enable_zerocopy_send_server": true, 00:21:01.566 "enable_zerocopy_send_client": false, 00:21:01.566 "zerocopy_threshold": 0, 00:21:01.566 "tls_version": 0, 00:21:01.566 "enable_ktls": false 00:21:01.566 } 00:21:01.566 } 00:21:01.566 ] 00:21:01.566 }, 00:21:01.566 { 00:21:01.566 "subsystem": "vmd", 00:21:01.566 "config": [] 00:21:01.566 }, 00:21:01.566 { 00:21:01.566 "subsystem": "accel", 00:21:01.566 "config": [ 00:21:01.566 { 00:21:01.566 "method": "accel_set_options", 00:21:01.566 "params": { 00:21:01.566 "small_cache_size": 128, 00:21:01.566 "large_cache_size": 16, 00:21:01.566 "task_count": 2048, 00:21:01.566 "sequence_count": 2048, 00:21:01.566 "buf_count": 2048 00:21:01.566 } 00:21:01.566 } 00:21:01.566 ] 00:21:01.566 }, 00:21:01.566 { 00:21:01.566 "subsystem": "bdev", 00:21:01.566 "config": [ 00:21:01.566 { 00:21:01.566 "method": "bdev_set_options", 00:21:01.566 "params": { 00:21:01.566 "bdev_io_pool_size": 65535, 00:21:01.566 "bdev_io_cache_size": 256, 00:21:01.566 "bdev_auto_examine": true, 00:21:01.566 "iobuf_small_cache_size": 128, 00:21:01.566 "iobuf_large_cache_size": 16 00:21:01.566 } 00:21:01.566 }, 00:21:01.566 { 00:21:01.566 "method": "bdev_raid_set_options", 00:21:01.566 "params": { 00:21:01.566 "process_window_size_kb": 1024 00:21:01.566 } 00:21:01.566 }, 00:21:01.566 { 00:21:01.566 "method": "bdev_iscsi_set_options", 00:21:01.566 "params": { 00:21:01.566 "timeout_sec": 30 00:21:01.566 } 00:21:01.566 }, 00:21:01.566 { 00:21:01.566 "method": "bdev_nvme_set_options", 00:21:01.566 "params": { 00:21:01.566 "action_on_timeout": "none", 00:21:01.566 "timeout_us": 0, 00:21:01.566 "timeout_admin_us": 0, 00:21:01.566 "keep_alive_timeout_ms": 10000, 00:21:01.566 "arbitration_burst": 0, 00:21:01.566 "low_priority_weight": 0, 00:21:01.566 "medium_priority_weight": 0, 00:21:01.566 "high_priority_weight": 0, 00:21:01.566 "nvme_adminq_poll_period_us": 10000, 00:21:01.566 "nvme_ioq_poll_period_us": 0, 00:21:01.566 "io_queue_requests": 0, 00:21:01.566 "delay_cmd_submit": true, 00:21:01.566 "transport_retry_count": 4, 00:21:01.566 "bdev_retry_count": 3, 00:21:01.566 "transport_ack_timeout": 0, 00:21:01.566 "ctrlr_loss_timeout_sec": 0, 00:21:01.566 "reconnect_delay_sec": 0, 00:21:01.566 "fast_io_fail_timeout_sec": 0, 00:21:01.566 "disable_auto_failback": false, 00:21:01.566 "generate_uuids": false, 00:21:01.566 "transport_tos": 0, 00:21:01.566 "nvme_error_stat": false, 00:21:01.566 "rdma_srq_size": 0, 00:21:01.566 "io_path_stat": false, 00:21:01.566 "allow_accel_sequence": false, 00:21:01.566 "rdma_max_cq_size": 0, 00:21:01.566 "rdma_cm_event_timeout_ms": 0, 00:21:01.566 "dhchap_digests": [ 00:21:01.566 "sha256", 00:21:01.566 
"sha384", 00:21:01.566 "sha512" 00:21:01.566 ], 00:21:01.566 "dhchap_dhgroups": [ 00:21:01.566 "null", 00:21:01.566 "ffdhe2048", 00:21:01.566 "ffdhe3072", 00:21:01.566 "ffdhe4096", 00:21:01.566 "ffdhe6144", 00:21:01.566 "ffdhe8192" 00:21:01.566 ] 00:21:01.566 } 00:21:01.566 }, 00:21:01.566 { 00:21:01.566 "method": "bdev_nvme_set_hotplug", 00:21:01.566 "params": { 00:21:01.566 "period_us": 100000, 00:21:01.566 "enable": false 00:21:01.566 } 00:21:01.566 }, 00:21:01.566 { 00:21:01.566 "method": "bdev_malloc_create", 00:21:01.566 "params": { 00:21:01.566 "name": "malloc0", 00:21:01.566 "num_blocks": 8192, 00:21:01.566 "block_size": 4096, 00:21:01.566 "physical_block_size": 4096, 00:21:01.566 "uuid": "22333c52-66cf-444c-80e1-e06583c94344", 00:21:01.566 "optimal_io_boundary": 0 00:21:01.566 } 00:21:01.566 }, 00:21:01.566 { 00:21:01.566 "method": "bdev_wait_for_examine" 00:21:01.566 } 00:21:01.566 ] 00:21:01.566 }, 00:21:01.566 { 00:21:01.566 "subsystem": "nbd", 00:21:01.566 "config": [] 00:21:01.566 }, 00:21:01.566 { 00:21:01.566 "subsystem": "scheduler", 00:21:01.566 "config": [ 00:21:01.566 { 00:21:01.566 "method": "framework_set_scheduler", 00:21:01.566 "params": { 00:21:01.566 "name": "static" 00:21:01.566 } 00:21:01.566 } 00:21:01.566 ] 00:21:01.566 }, 00:21:01.566 { 00:21:01.566 "subsystem": "nvmf", 00:21:01.566 "config": [ 00:21:01.566 { 00:21:01.566 "method": "nvmf_set_config", 00:21:01.566 "params": { 00:21:01.566 "discovery_filter": "match_any", 00:21:01.566 "admin_cmd_passthru": { 00:21:01.566 "identify_ctrlr": false 00:21:01.567 } 00:21:01.567 } 00:21:01.567 }, 00:21:01.567 { 00:21:01.567 "method": "nvmf_set_max_subsystems", 00:21:01.567 "params": { 00:21:01.567 "max_subsystems": 1024 00:21:01.567 } 00:21:01.567 }, 00:21:01.567 { 00:21:01.567 "method": "nvmf_set_crdt", 00:21:01.567 "params": { 00:21:01.567 "crdt1": 0, 00:21:01.567 "crdt2": 0, 00:21:01.567 "crdt3": 0 00:21:01.567 } 00:21:01.567 }, 00:21:01.567 { 00:21:01.567 "method": "nvmf_create_transport", 00:21:01.567 "params": { 00:21:01.567 "trtype": "TCP", 00:21:01.567 "max_queue_depth": 128, 00:21:01.567 "max_io_qpairs_per_ctrlr": 127, 00:21:01.567 "in_capsule_data_size": 4096, 00:21:01.567 "max_io_size": 131072, 00:21:01.567 "io_unit_size": 131072, 00:21:01.567 "max_aq_depth": 128, 00:21:01.567 "num_shared_buffers": 511, 00:21:01.567 "buf_cache_size": 4294967295, 00:21:01.567 "dif_insert_or_strip": false, 00:21:01.567 "zcopy": false, 00:21:01.567 "c2h_success": false, 00:21:01.567 "sock_priority": 0, 00:21:01.567 "abort_timeout_sec": 1, 00:21:01.567 "ack_timeout": 0, 00:21:01.567 "data_wr_pool_size": 0 00:21:01.567 } 00:21:01.567 }, 00:21:01.567 { 00:21:01.567 "method": "nvmf_create_subsystem", 00:21:01.567 "params": { 00:21:01.567 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.567 "allow_any_host": false, 00:21:01.567 "serial_number": "00000000000000000000", 00:21:01.567 "model_number": "SPDK bdev Controller", 00:21:01.567 "max_namespaces": 32, 00:21:01.567 "min_cntlid": 1, 00:21:01.567 "max_cntlid": 65519, 00:21:01.567 "ana_reporting": false 00:21:01.567 } 00:21:01.567 }, 00:21:01.567 { 00:21:01.567 "method": "nvmf_subsystem_add_host", 00:21:01.567 "params": { 00:21:01.567 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.567 "host": "nqn.2016-06.io.spdk:host1", 00:21:01.567 "psk": "key0" 00:21:01.567 } 00:21:01.567 }, 00:21:01.567 { 00:21:01.567 "method": "nvmf_subsystem_add_ns", 00:21:01.567 "params": { 00:21:01.567 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.567 "namespace": { 00:21:01.567 "nsid": 1, 00:21:01.567 
"bdev_name": "malloc0", 00:21:01.567 "nguid": "22333C5266CF444C80E1E06583C94344", 00:21:01.567 "uuid": "22333c52-66cf-444c-80e1-e06583c94344", 00:21:01.567 "no_auto_visible": false 00:21:01.567 } 00:21:01.567 } 00:21:01.567 }, 00:21:01.567 { 00:21:01.567 "method": "nvmf_subsystem_add_listener", 00:21:01.567 "params": { 00:21:01.567 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.567 "listen_address": { 00:21:01.567 "trtype": "TCP", 00:21:01.567 "adrfam": "IPv4", 00:21:01.567 "traddr": "10.0.0.2", 00:21:01.567 "trsvcid": "4420" 00:21:01.567 }, 00:21:01.567 "secure_channel": true 00:21:01.567 } 00:21:01.567 } 00:21:01.567 ] 00:21:01.567 } 00:21:01.567 ] 00:21:01.567 }' 00:21:01.567 10:46:25 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:01.827 10:46:25 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:21:01.827 "subsystems": [ 00:21:01.827 { 00:21:01.827 "subsystem": "keyring", 00:21:01.827 "config": [ 00:21:01.827 { 00:21:01.827 "method": "keyring_file_add_key", 00:21:01.827 "params": { 00:21:01.827 "name": "key0", 00:21:01.827 "path": "/tmp/tmp.LLy1V17JdC" 00:21:01.827 } 00:21:01.827 } 00:21:01.827 ] 00:21:01.827 }, 00:21:01.827 { 00:21:01.827 "subsystem": "iobuf", 00:21:01.827 "config": [ 00:21:01.827 { 00:21:01.827 "method": "iobuf_set_options", 00:21:01.827 "params": { 00:21:01.827 "small_pool_count": 8192, 00:21:01.827 "large_pool_count": 1024, 00:21:01.827 "small_bufsize": 8192, 00:21:01.827 "large_bufsize": 135168 00:21:01.827 } 00:21:01.827 } 00:21:01.827 ] 00:21:01.827 }, 00:21:01.827 { 00:21:01.827 "subsystem": "sock", 00:21:01.827 "config": [ 00:21:01.827 { 00:21:01.827 "method": "sock_set_default_impl", 00:21:01.827 "params": { 00:21:01.827 "impl_name": "posix" 00:21:01.827 } 00:21:01.827 }, 00:21:01.827 { 00:21:01.827 "method": "sock_impl_set_options", 00:21:01.827 "params": { 00:21:01.828 "impl_name": "ssl", 00:21:01.828 "recv_buf_size": 4096, 00:21:01.828 "send_buf_size": 4096, 00:21:01.828 "enable_recv_pipe": true, 00:21:01.828 "enable_quickack": false, 00:21:01.828 "enable_placement_id": 0, 00:21:01.828 "enable_zerocopy_send_server": true, 00:21:01.828 "enable_zerocopy_send_client": false, 00:21:01.828 "zerocopy_threshold": 0, 00:21:01.828 "tls_version": 0, 00:21:01.828 "enable_ktls": false 00:21:01.828 } 00:21:01.828 }, 00:21:01.828 { 00:21:01.828 "method": "sock_impl_set_options", 00:21:01.828 "params": { 00:21:01.828 "impl_name": "posix", 00:21:01.828 "recv_buf_size": 2097152, 00:21:01.828 "send_buf_size": 2097152, 00:21:01.828 "enable_recv_pipe": true, 00:21:01.828 "enable_quickack": false, 00:21:01.828 "enable_placement_id": 0, 00:21:01.828 "enable_zerocopy_send_server": true, 00:21:01.828 "enable_zerocopy_send_client": false, 00:21:01.828 "zerocopy_threshold": 0, 00:21:01.828 "tls_version": 0, 00:21:01.828 "enable_ktls": false 00:21:01.828 } 00:21:01.828 } 00:21:01.828 ] 00:21:01.828 }, 00:21:01.828 { 00:21:01.828 "subsystem": "vmd", 00:21:01.828 "config": [] 00:21:01.828 }, 00:21:01.828 { 00:21:01.828 "subsystem": "accel", 00:21:01.828 "config": [ 00:21:01.828 { 00:21:01.828 "method": "accel_set_options", 00:21:01.828 "params": { 00:21:01.828 "small_cache_size": 128, 00:21:01.828 "large_cache_size": 16, 00:21:01.828 "task_count": 2048, 00:21:01.828 "sequence_count": 2048, 00:21:01.828 "buf_count": 2048 00:21:01.828 } 00:21:01.828 } 00:21:01.828 ] 00:21:01.828 }, 00:21:01.828 { 00:21:01.828 "subsystem": "bdev", 00:21:01.828 "config": [ 00:21:01.828 { 
00:21:01.828 "method": "bdev_set_options", 00:21:01.828 "params": { 00:21:01.828 "bdev_io_pool_size": 65535, 00:21:01.828 "bdev_io_cache_size": 256, 00:21:01.828 "bdev_auto_examine": true, 00:21:01.828 "iobuf_small_cache_size": 128, 00:21:01.828 "iobuf_large_cache_size": 16 00:21:01.828 } 00:21:01.828 }, 00:21:01.828 { 00:21:01.828 "method": "bdev_raid_set_options", 00:21:01.828 "params": { 00:21:01.828 "process_window_size_kb": 1024 00:21:01.828 } 00:21:01.828 }, 00:21:01.828 { 00:21:01.828 "method": "bdev_iscsi_set_options", 00:21:01.828 "params": { 00:21:01.828 "timeout_sec": 30 00:21:01.828 } 00:21:01.828 }, 00:21:01.828 { 00:21:01.828 "method": "bdev_nvme_set_options", 00:21:01.828 "params": { 00:21:01.828 "action_on_timeout": "none", 00:21:01.828 "timeout_us": 0, 00:21:01.828 "timeout_admin_us": 0, 00:21:01.828 "keep_alive_timeout_ms": 10000, 00:21:01.828 "arbitration_burst": 0, 00:21:01.828 "low_priority_weight": 0, 00:21:01.828 "medium_priority_weight": 0, 00:21:01.828 "high_priority_weight": 0, 00:21:01.828 "nvme_adminq_poll_period_us": 10000, 00:21:01.828 "nvme_ioq_poll_period_us": 0, 00:21:01.828 "io_queue_requests": 512, 00:21:01.828 "delay_cmd_submit": true, 00:21:01.828 "transport_retry_count": 4, 00:21:01.828 "bdev_retry_count": 3, 00:21:01.828 "transport_ack_timeout": 0, 00:21:01.828 "ctrlr_loss_timeout_sec": 0, 00:21:01.828 "reconnect_delay_sec": 0, 00:21:01.828 "fast_io_fail_timeout_sec": 0, 00:21:01.828 "disable_auto_failback": false, 00:21:01.828 "generate_uuids": false, 00:21:01.828 "transport_tos": 0, 00:21:01.828 "nvme_error_stat": false, 00:21:01.828 "rdma_srq_size": 0, 00:21:01.828 "io_path_stat": false, 00:21:01.828 "allow_accel_sequence": false, 00:21:01.828 "rdma_max_cq_size": 0, 00:21:01.828 "rdma_cm_event_timeout_ms": 0, 00:21:01.828 "dhchap_digests": [ 00:21:01.828 "sha256", 00:21:01.828 "sha384", 00:21:01.828 "sha512" 00:21:01.828 ], 00:21:01.828 "dhchap_dhgroups": [ 00:21:01.828 "null", 00:21:01.828 "ffdhe2048", 00:21:01.828 "ffdhe3072", 00:21:01.828 "ffdhe4096", 00:21:01.828 "ffdhe6144", 00:21:01.828 "ffdhe8192" 00:21:01.828 ] 00:21:01.828 } 00:21:01.828 }, 00:21:01.828 { 00:21:01.828 "method": "bdev_nvme_attach_controller", 00:21:01.828 "params": { 00:21:01.828 "name": "nvme0", 00:21:01.828 "trtype": "TCP", 00:21:01.828 "adrfam": "IPv4", 00:21:01.828 "traddr": "10.0.0.2", 00:21:01.828 "trsvcid": "4420", 00:21:01.828 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.828 "prchk_reftag": false, 00:21:01.828 "prchk_guard": false, 00:21:01.828 "ctrlr_loss_timeout_sec": 0, 00:21:01.828 "reconnect_delay_sec": 0, 00:21:01.828 "fast_io_fail_timeout_sec": 0, 00:21:01.828 "psk": "key0", 00:21:01.828 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:01.828 "hdgst": false, 00:21:01.828 "ddgst": false 00:21:01.828 } 00:21:01.828 }, 00:21:01.828 { 00:21:01.828 "method": "bdev_nvme_set_hotplug", 00:21:01.828 "params": { 00:21:01.828 "period_us": 100000, 00:21:01.828 "enable": false 00:21:01.828 } 00:21:01.828 }, 00:21:01.828 { 00:21:01.828 "method": "bdev_enable_histogram", 00:21:01.828 "params": { 00:21:01.828 "name": "nvme0n1", 00:21:01.828 "enable": true 00:21:01.828 } 00:21:01.828 }, 00:21:01.828 { 00:21:01.828 "method": "bdev_wait_for_examine" 00:21:01.828 } 00:21:01.828 ] 00:21:01.828 }, 00:21:01.828 { 00:21:01.828 "subsystem": "nbd", 00:21:01.828 "config": [] 00:21:01.828 } 00:21:01.828 ] 00:21:01.828 }' 00:21:01.828 10:46:25 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 876705 00:21:01.828 10:46:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' 
-z 876705 ']' 00:21:01.828 10:46:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 876705 00:21:01.828 10:46:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:01.828 10:46:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:01.828 10:46:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 876705 00:21:01.828 10:46:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:01.828 10:46:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:01.828 10:46:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 876705' 00:21:01.828 killing process with pid 876705 00:21:01.828 10:46:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 876705 00:21:01.828 Received shutdown signal, test time was about 1.000000 seconds 00:21:01.828 00:21:01.828 Latency(us) 00:21:01.828 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.828 =================================================================================================================== 00:21:01.828 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:01.828 10:46:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 876705 00:21:01.828 10:46:26 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 876602 00:21:01.828 10:46:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 876602 ']' 00:21:01.828 10:46:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 876602 00:21:01.828 10:46:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:01.828 10:46:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:01.828 10:46:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 876602 00:21:01.828 10:46:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:01.828 10:46:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:01.828 10:46:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 876602' 00:21:01.828 killing process with pid 876602 00:21:01.828 10:46:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 876602 00:21:01.828 [2024-06-10 10:46:26.112650] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:01.828 10:46:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 876602 00:21:02.089 10:46:26 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:21:02.089 10:46:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:02.089 10:46:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:02.089 10:46:26 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:21:02.089 "subsystems": [ 00:21:02.089 { 00:21:02.089 "subsystem": "keyring", 00:21:02.089 "config": [ 00:21:02.089 { 00:21:02.089 "method": "keyring_file_add_key", 00:21:02.089 "params": { 00:21:02.089 "name": "key0", 00:21:02.089 "path": "/tmp/tmp.LLy1V17JdC" 00:21:02.089 } 00:21:02.089 } 00:21:02.089 ] 00:21:02.089 }, 00:21:02.089 { 00:21:02.089 "subsystem": "iobuf", 00:21:02.089 "config": [ 00:21:02.089 { 00:21:02.089 "method": "iobuf_set_options", 00:21:02.089 "params": { 00:21:02.089 "small_pool_count": 8192, 
00:21:02.089 "large_pool_count": 1024, 00:21:02.089 "small_bufsize": 8192, 00:21:02.089 "large_bufsize": 135168 00:21:02.089 } 00:21:02.089 } 00:21:02.089 ] 00:21:02.089 }, 00:21:02.089 { 00:21:02.089 "subsystem": "sock", 00:21:02.089 "config": [ 00:21:02.089 { 00:21:02.089 "method": "sock_set_default_impl", 00:21:02.089 "params": { 00:21:02.089 "impl_name": "posix" 00:21:02.089 } 00:21:02.089 }, 00:21:02.089 { 00:21:02.089 "method": "sock_impl_set_options", 00:21:02.089 "params": { 00:21:02.089 "impl_name": "ssl", 00:21:02.089 "recv_buf_size": 4096, 00:21:02.089 "send_buf_size": 4096, 00:21:02.089 "enable_recv_pipe": true, 00:21:02.089 "enable_quickack": false, 00:21:02.089 "enable_placement_id": 0, 00:21:02.089 "enable_zerocopy_send_server": true, 00:21:02.089 "enable_zerocopy_send_client": false, 00:21:02.089 "zerocopy_threshold": 0, 00:21:02.089 "tls_version": 0, 00:21:02.089 "enable_ktls": false 00:21:02.089 } 00:21:02.089 }, 00:21:02.089 { 00:21:02.089 "method": "sock_impl_set_options", 00:21:02.089 "params": { 00:21:02.089 "impl_name": "posix", 00:21:02.089 "recv_buf_size": 2097152, 00:21:02.089 "send_buf_size": 2097152, 00:21:02.089 "enable_recv_pipe": true, 00:21:02.089 "enable_quickack": false, 00:21:02.089 "enable_placement_id": 0, 00:21:02.089 "enable_zerocopy_send_server": true, 00:21:02.089 "enable_zerocopy_send_client": false, 00:21:02.089 "zerocopy_threshold": 0, 00:21:02.089 "tls_version": 0, 00:21:02.089 "enable_ktls": false 00:21:02.089 } 00:21:02.089 } 00:21:02.089 ] 00:21:02.089 }, 00:21:02.089 { 00:21:02.089 "subsystem": "vmd", 00:21:02.089 "config": [] 00:21:02.089 }, 00:21:02.089 { 00:21:02.089 "subsystem": "accel", 00:21:02.089 "config": [ 00:21:02.089 { 00:21:02.089 "method": "accel_set_options", 00:21:02.089 "params": { 00:21:02.089 "small_cache_size": 128, 00:21:02.089 "large_cache_size": 16, 00:21:02.089 "task_count": 2048, 00:21:02.089 "sequence_count": 2048, 00:21:02.089 "buf_count": 2048 00:21:02.089 } 00:21:02.089 } 00:21:02.089 ] 00:21:02.089 }, 00:21:02.089 { 00:21:02.089 "subsystem": "bdev", 00:21:02.089 "config": [ 00:21:02.089 { 00:21:02.089 "method": "bdev_set_options", 00:21:02.089 "params": { 00:21:02.089 "bdev_io_pool_size": 65535, 00:21:02.089 "bdev_io_cache_size": 256, 00:21:02.089 "bdev_auto_examine": true, 00:21:02.089 "iobuf_small_cache_size": 128, 00:21:02.089 "iobuf_large_cache_size": 16 00:21:02.089 } 00:21:02.089 }, 00:21:02.089 { 00:21:02.089 "method": "bdev_raid_set_options", 00:21:02.089 "params": { 00:21:02.089 "process_window_size_kb": 1024 00:21:02.089 } 00:21:02.089 }, 00:21:02.089 { 00:21:02.089 "method": "bdev_iscsi_set_options", 00:21:02.089 "params": { 00:21:02.089 "timeout_sec": 30 00:21:02.089 } 00:21:02.089 }, 00:21:02.089 { 00:21:02.089 "method": "bdev_nvme_set_options", 00:21:02.089 "params": { 00:21:02.089 "action_on_timeout": "none", 00:21:02.089 "timeout_us": 0, 00:21:02.089 "timeout_admin_us": 0, 00:21:02.089 "keep_alive_timeout_ms": 10000, 00:21:02.089 "arbitration_burst": 0, 00:21:02.089 "low_priority_weight": 0, 00:21:02.089 "medium_priority_weight": 0, 00:21:02.089 "high_priority_weight": 0, 00:21:02.089 "nvme_adminq_poll_period_us": 10000, 00:21:02.089 "nvme_ioq_poll_period_us": 0, 00:21:02.089 "io_queue_requests": 0, 00:21:02.089 "delay_cmd_submit": true, 00:21:02.089 "transport_retry_count": 4, 00:21:02.089 "bdev_retry_count": 3, 00:21:02.089 "transport_ack_timeout": 0, 00:21:02.089 "ctrlr_loss_timeout_sec": 0, 00:21:02.089 "reconnect_delay_sec": 0, 00:21:02.089 "fast_io_fail_timeout_sec": 0, 00:21:02.089 
"disable_auto_failback": false, 00:21:02.089 "generate_uuids": false, 00:21:02.089 "transport_tos": 0, 00:21:02.089 "nvme_error_stat": false, 00:21:02.089 "rdma_srq_size": 0, 00:21:02.089 "io_path_stat": false, 00:21:02.089 "allow_accel_sequence": false, 00:21:02.089 "rdma_max_cq_size": 0, 00:21:02.089 "rdma_cm_event_timeout_ms": 0, 00:21:02.089 "dhchap_digests": [ 00:21:02.089 "sha256", 00:21:02.089 "sha384", 00:21:02.089 "sha512" 00:21:02.089 ], 00:21:02.089 "dhchap_dhgroups": [ 00:21:02.089 "null", 00:21:02.089 "ffdhe2048", 00:21:02.089 "ffdhe3072", 00:21:02.089 "ffdhe4096", 00:21:02.089 "ffdhe6144", 00:21:02.089 "ffdhe8192" 00:21:02.089 ] 00:21:02.089 } 00:21:02.089 }, 00:21:02.089 { 00:21:02.089 "method": "bdev_nvme_set_hotplug", 00:21:02.089 "params": { 00:21:02.089 "period_us": 100000, 00:21:02.089 "enable": false 00:21:02.089 } 00:21:02.089 }, 00:21:02.089 { 00:21:02.089 "method": "bdev_malloc_create", 00:21:02.089 "params": { 00:21:02.089 "name": "malloc0", 00:21:02.089 "num_blocks": 8192, 00:21:02.089 "block_size": 4096, 00:21:02.089 "physical_block_size": 4096, 00:21:02.089 "uuid": "22333c52-66cf-444c-80e1-e06583c94344", 00:21:02.089 "optimal_io_boundary": 0 00:21:02.089 } 00:21:02.089 }, 00:21:02.089 { 00:21:02.089 "method": "bdev_wait_for_examine" 00:21:02.089 } 00:21:02.089 ] 00:21:02.089 }, 00:21:02.089 { 00:21:02.089 "subsystem": "nbd", 00:21:02.089 "config": [] 00:21:02.090 }, 00:21:02.090 { 00:21:02.090 "subsystem": "scheduler", 00:21:02.090 "config": [ 00:21:02.090 { 00:21:02.090 "method": "framework_set_scheduler", 00:21:02.090 "params": { 00:21:02.090 "name": "static" 00:21:02.090 } 00:21:02.090 } 00:21:02.090 ] 00:21:02.090 }, 00:21:02.090 { 00:21:02.090 "subsystem": "nvmf", 00:21:02.090 "config": [ 00:21:02.090 { 00:21:02.090 "method": "nvmf_set_config", 00:21:02.090 "params": { 00:21:02.090 "discovery_filter": "match_any", 00:21:02.090 "admin_cmd_passthru": { 00:21:02.090 "identify_ctrlr": false 00:21:02.090 } 00:21:02.090 } 00:21:02.090 }, 00:21:02.090 { 00:21:02.090 "method": "nvmf_set_max_subsystems", 00:21:02.090 "params": { 00:21:02.090 "max_subsystems": 1024 00:21:02.090 } 00:21:02.090 }, 00:21:02.090 { 00:21:02.090 "method": "nvmf_set_crdt", 00:21:02.090 "params": { 00:21:02.090 "crdt1": 0, 00:21:02.090 "crdt2": 0, 00:21:02.090 "crdt3": 0 00:21:02.090 } 00:21:02.090 }, 00:21:02.090 { 00:21:02.090 "method": "nvmf_create_transport", 00:21:02.090 "params": { 00:21:02.090 "trtype": "TCP", 00:21:02.090 "max_queue_depth": 128, 00:21:02.090 "max_io_qpairs_per_ctrlr": 127, 00:21:02.090 "in_capsule_data_size": 4096, 00:21:02.090 "max_io_size": 131072, 00:21:02.090 "io_unit_size": 131072, 00:21:02.090 "max_aq_depth": 128, 00:21:02.090 "num_shared_buffers": 511, 00:21:02.090 "buf_cache_size": 4294967295, 00:21:02.090 "dif_insert_or_strip": false, 00:21:02.090 "zcopy": false, 00:21:02.090 "c2h_success": false, 00:21:02.090 "sock_priority": 0, 00:21:02.090 "abort_timeout_sec": 1, 00:21:02.090 "ack_timeout": 0, 00:21:02.090 "data_wr_pool_size": 0 00:21:02.090 } 00:21:02.090 }, 00:21:02.090 { 00:21:02.090 "method": "nvmf_create_subsystem", 00:21:02.090 "params": { 00:21:02.090 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:02.090 "allow_any_host": false, 00:21:02.090 "serial_number": "00000000000000000000", 00:21:02.090 "model_number": "SPDK bdev Controller", 00:21:02.090 "max_namespaces": 32, 00:21:02.090 "min_cntlid": 1, 00:21:02.090 "max_cntlid": 65519, 00:21:02.090 "ana_reporting": false 00:21:02.090 } 00:21:02.090 }, 00:21:02.090 { 00:21:02.090 "method": 
"nvmf_subsystem_add_host", 00:21:02.090 "params": { 00:21:02.090 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:02.090 "host": "nqn.2016-06.io.spdk:host1", 00:21:02.090 "psk": "key0" 00:21:02.090 } 00:21:02.090 }, 00:21:02.090 { 00:21:02.090 "method": "nvmf_subsystem_add_ns", 00:21:02.090 "params": { 00:21:02.090 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:02.090 "namespace": { 00:21:02.090 "nsid": 1, 00:21:02.090 "bdev_name": "malloc0", 00:21:02.090 "nguid": "22333C5266CF444C80E1E06583C94344", 00:21:02.090 "uuid": "22333c52-66cf-444c-80e1-e06583c94344", 00:21:02.090 "no_auto_visible": false 00:21:02.090 } 00:21:02.090 } 00:21:02.090 }, 00:21:02.090 { 00:21:02.090 "method": "nvmf_subsystem_add_listener", 00:21:02.090 "params": { 00:21:02.090 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:02.090 "listen_address": { 00:21:02.090 "trtype": "TCP", 00:21:02.090 "adrfam": "IPv4", 00:21:02.090 "traddr": "10.0.0.2", 00:21:02.090 "trsvcid": "4420" 00:21:02.090 }, 00:21:02.090 "secure_channel": true 00:21:02.090 } 00:21:02.090 } 00:21:02.090 ] 00:21:02.090 } 00:21:02.090 ] 00:21:02.090 }' 00:21:02.090 10:46:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.090 10:46:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=877393 00:21:02.090 10:46:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 877393 00:21:02.090 10:46:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:02.090 10:46:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 877393 ']' 00:21:02.090 10:46:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.090 10:46:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:02.090 10:46:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.090 10:46:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:02.090 10:46:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.090 [2024-06-10 10:46:26.319479] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:21:02.090 [2024-06-10 10:46:26.319531] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.090 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.350 [2024-06-10 10:46:26.384015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.350 [2024-06-10 10:46:26.447820] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.350 [2024-06-10 10:46:26.447855] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.350 [2024-06-10 10:46:26.447863] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:02.350 [2024-06-10 10:46:26.447870] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:02.350 [2024-06-10 10:46:26.447876] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:02.350 [2024-06-10 10:46:26.447933] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.610 [2024-06-10 10:46:26.644921] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:02.610 [2024-06-10 10:46:26.676902] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:02.610 [2024-06-10 10:46:26.676946] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:02.610 [2024-06-10 10:46:26.687636] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:02.872 10:46:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:02.872 10:46:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:02.872 10:46:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:02.872 10:46:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:02.872 10:46:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.872 10:46:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:02.872 10:46:27 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=877501 00:21:02.872 10:46:27 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 877501 /var/tmp/bdevperf.sock 00:21:02.872 10:46:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 877501 ']' 00:21:02.872 10:46:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:02.872 10:46:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:02.872 10:46:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:02.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
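With the target up inside the cvl_0_0_ns_spdk namespace and listening on 10.0.0.2 port 4420 (TLS still flagged as experimental), the script moves on to starting bdevperf as the initiator. Reachability of that listener can also be confirmed by hand from the initiator side; this is not part of the test script, just a quick check using bash's built-in /dev/tcp:

# hand check only, not in the script: succeeds once the listener accepts TCP connects
timeout 2 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' && echo '4420 reachable' || echo 'no listener yet'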
00:21:02.872 10:46:27 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:02.872 10:46:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:02.872 10:46:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.872 10:46:27 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:21:02.872 "subsystems": [ 00:21:02.872 { 00:21:02.872 "subsystem": "keyring", 00:21:02.872 "config": [ 00:21:02.872 { 00:21:02.872 "method": "keyring_file_add_key", 00:21:02.872 "params": { 00:21:02.872 "name": "key0", 00:21:02.872 "path": "/tmp/tmp.LLy1V17JdC" 00:21:02.872 } 00:21:02.872 } 00:21:02.872 ] 00:21:02.872 }, 00:21:02.872 { 00:21:02.872 "subsystem": "iobuf", 00:21:02.872 "config": [ 00:21:02.872 { 00:21:02.872 "method": "iobuf_set_options", 00:21:02.872 "params": { 00:21:02.872 "small_pool_count": 8192, 00:21:02.872 "large_pool_count": 1024, 00:21:02.872 "small_bufsize": 8192, 00:21:02.872 "large_bufsize": 135168 00:21:02.872 } 00:21:02.872 } 00:21:02.872 ] 00:21:02.872 }, 00:21:02.872 { 00:21:02.872 "subsystem": "sock", 00:21:02.872 "config": [ 00:21:02.872 { 00:21:02.872 "method": "sock_set_default_impl", 00:21:02.872 "params": { 00:21:02.872 "impl_name": "posix" 00:21:02.872 } 00:21:02.872 }, 00:21:02.872 { 00:21:02.872 "method": "sock_impl_set_options", 00:21:02.872 "params": { 00:21:02.872 "impl_name": "ssl", 00:21:02.872 "recv_buf_size": 4096, 00:21:02.872 "send_buf_size": 4096, 00:21:02.872 "enable_recv_pipe": true, 00:21:02.872 "enable_quickack": false, 00:21:02.872 "enable_placement_id": 0, 00:21:02.872 "enable_zerocopy_send_server": true, 00:21:02.872 "enable_zerocopy_send_client": false, 00:21:02.872 "zerocopy_threshold": 0, 00:21:02.872 "tls_version": 0, 00:21:02.872 "enable_ktls": false 00:21:02.872 } 00:21:02.872 }, 00:21:02.872 { 00:21:02.872 "method": "sock_impl_set_options", 00:21:02.872 "params": { 00:21:02.872 "impl_name": "posix", 00:21:02.872 "recv_buf_size": 2097152, 00:21:02.872 "send_buf_size": 2097152, 00:21:02.872 "enable_recv_pipe": true, 00:21:02.872 "enable_quickack": false, 00:21:02.872 "enable_placement_id": 0, 00:21:02.872 "enable_zerocopy_send_server": true, 00:21:02.872 "enable_zerocopy_send_client": false, 00:21:02.872 "zerocopy_threshold": 0, 00:21:02.872 "tls_version": 0, 00:21:02.872 "enable_ktls": false 00:21:02.872 } 00:21:02.872 } 00:21:02.872 ] 00:21:02.872 }, 00:21:02.872 { 00:21:02.872 "subsystem": "vmd", 00:21:02.872 "config": [] 00:21:02.872 }, 00:21:02.872 { 00:21:02.872 "subsystem": "accel", 00:21:02.872 "config": [ 00:21:02.872 { 00:21:02.872 "method": "accel_set_options", 00:21:02.872 "params": { 00:21:02.872 "small_cache_size": 128, 00:21:02.872 "large_cache_size": 16, 00:21:02.872 "task_count": 2048, 00:21:02.872 "sequence_count": 2048, 00:21:02.872 "buf_count": 2048 00:21:02.872 } 00:21:02.872 } 00:21:02.872 ] 00:21:02.872 }, 00:21:02.872 { 00:21:02.872 "subsystem": "bdev", 00:21:02.872 "config": [ 00:21:02.872 { 00:21:02.872 "method": "bdev_set_options", 00:21:02.872 "params": { 00:21:02.872 "bdev_io_pool_size": 65535, 00:21:02.872 "bdev_io_cache_size": 256, 00:21:02.872 "bdev_auto_examine": true, 00:21:02.872 "iobuf_small_cache_size": 128, 00:21:02.872 "iobuf_large_cache_size": 16 00:21:02.872 } 00:21:02.872 }, 00:21:02.872 { 00:21:02.872 "method": "bdev_raid_set_options", 00:21:02.872 "params": { 00:21:02.872 "process_window_size_kb": 1024 00:21:02.872 } 
00:21:02.872 }, 00:21:02.872 { 00:21:02.872 "method": "bdev_iscsi_set_options", 00:21:02.872 "params": { 00:21:02.872 "timeout_sec": 30 00:21:02.872 } 00:21:02.872 }, 00:21:02.872 { 00:21:02.872 "method": "bdev_nvme_set_options", 00:21:02.872 "params": { 00:21:02.872 "action_on_timeout": "none", 00:21:02.872 "timeout_us": 0, 00:21:02.872 "timeout_admin_us": 0, 00:21:02.872 "keep_alive_timeout_ms": 10000, 00:21:02.872 "arbitration_burst": 0, 00:21:02.872 "low_priority_weight": 0, 00:21:02.872 "medium_priority_weight": 0, 00:21:02.872 "high_priority_weight": 0, 00:21:02.872 "nvme_adminq_poll_period_us": 10000, 00:21:02.872 "nvme_ioq_poll_period_us": 0, 00:21:02.872 "io_queue_requests": 512, 00:21:02.872 "delay_cmd_submit": true, 00:21:02.872 "transport_retry_count": 4, 00:21:02.872 "bdev_retry_count": 3, 00:21:02.872 "transport_ack_timeout": 0, 00:21:02.872 "ctrlr_loss_timeout_sec": 0, 00:21:02.872 "reconnect_delay_sec": 0, 00:21:02.872 "fast_io_fail_timeout_sec": 0, 00:21:02.873 "disable_auto_failback": false, 00:21:02.873 "generate_uuids": false, 00:21:02.873 "transport_tos": 0, 00:21:02.873 "nvme_error_stat": false, 00:21:02.873 "rdma_srq_size": 0, 00:21:02.873 "io_path_stat": false, 00:21:02.873 "allow_accel_sequence": false, 00:21:02.873 "rdma_max_cq_size": 0, 00:21:02.873 "rdma_cm_event_timeout_ms": 0, 00:21:02.873 "dhchap_digests": [ 00:21:02.873 "sha256", 00:21:02.873 "sha384", 00:21:02.873 "sha512" 00:21:02.873 ], 00:21:02.873 "dhchap_dhgroups": [ 00:21:02.873 "null", 00:21:02.873 "ffdhe2048", 00:21:02.873 "ffdhe3072", 00:21:02.873 "ffdhe4096", 00:21:02.873 "ffdhe6144", 00:21:02.873 "ffdhe8192" 00:21:02.873 ] 00:21:02.873 } 00:21:02.873 }, 00:21:02.873 { 00:21:02.873 "method": "bdev_nvme_attach_controller", 00:21:02.873 "params": { 00:21:02.873 "name": "nvme0", 00:21:02.873 "trtype": "TCP", 00:21:02.873 "adrfam": "IPv4", 00:21:02.873 "traddr": "10.0.0.2", 00:21:02.873 "trsvcid": "4420", 00:21:02.873 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:02.873 "prchk_reftag": false, 00:21:02.873 "prchk_guard": false, 00:21:02.873 "ctrlr_loss_timeout_sec": 0, 00:21:02.873 "reconnect_delay_sec": 0, 00:21:02.873 "fast_io_fail_timeout_sec": 0, 00:21:02.873 "psk": "key0", 00:21:02.873 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:02.873 "hdgst": false, 00:21:02.873 "ddgst": false 00:21:02.873 } 00:21:02.873 }, 00:21:02.873 { 00:21:02.873 "method": "bdev_nvme_set_hotplug", 00:21:02.873 "params": { 00:21:02.873 "period_us": 100000, 00:21:02.873 "enable": false 00:21:02.873 } 00:21:02.873 }, 00:21:02.873 { 00:21:02.873 "method": "bdev_enable_histogram", 00:21:02.873 "params": { 00:21:02.873 "name": "nvme0n1", 00:21:02.873 "enable": true 00:21:02.873 } 00:21:02.873 }, 00:21:02.873 { 00:21:02.873 "method": "bdev_wait_for_examine" 00:21:02.873 } 00:21:02.873 ] 00:21:02.873 }, 00:21:02.873 { 00:21:02.873 "subsystem": "nbd", 00:21:02.873 "config": [] 00:21:02.873 } 00:21:02.873 ] 00:21:02.873 }' 00:21:02.873 [2024-06-10 10:46:27.155103] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
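The configuration echoed into bdevperf is the initiator-side mirror of the target config: the same PSK is registered under the keyring as key0 and then referenced by name in bdev_nvme_attach_controller, so the key material itself never appears on a command line. bdevperf is started idle with -z and only begins I/O when told to over its RPC socket; the general shape of that pattern, stripped of the config plumbing (a sketch of the sequence, not the exact tls.sh flow):

# start bdevperf idle on its own RPC socket, then drive it from outside
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
# once it is up, it can be inspected or configured over that socket
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
# kick off the run; the I/O statistics are printed by the bdevperf process itself
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests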
00:21:02.873 [2024-06-10 10:46:27.155152] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid877501 ] 00:21:03.134 EAL: No free 2048 kB hugepages reported on node 1 00:21:03.134 [2024-06-10 10:46:27.228091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.134 [2024-06-10 10:46:27.282080] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:21:03.134 [2024-06-10 10:46:27.415696] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:03.705 10:46:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:03.705 10:46:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:21:03.705 10:46:27 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:03.705 10:46:27 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:21:03.965 10:46:28 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.965 10:46:28 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:03.965 Running I/O for 1 seconds... 00:21:04.908 00:21:04.908 Latency(us) 00:21:04.908 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.908 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:04.908 Verification LBA range: start 0x0 length 0x2000 00:21:04.908 nvme0n1 : 1.02 5091.35 19.89 0.00 0.00 24928.08 4532.91 40850.77 00:21:04.908 =================================================================================================================== 00:21:04.908 Total : 5091.35 19.89 0.00 0.00 24928.08 4532.91 40850.77 00:21:04.908 0 00:21:05.168 10:46:29 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:21:05.168 10:46:29 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:21:05.168 10:46:29 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:05.168 10:46:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # type=--id 00:21:05.168 10:46:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # id=0 00:21:05.168 10:46:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:21:05.168 10:46:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:05.168 10:46:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:21:05.168 10:46:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:21:05.168 10:46:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # for n in $shm_files 00:21:05.168 10:46:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:05.168 nvmf_trace.0 00:21:05.169 10:46:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@822 -- # return 0 00:21:05.169 10:46:29 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 877501 00:21:05.169 10:46:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 877501 ']' 00:21:05.169 10:46:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 877501 
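The one-second verify above settles at roughly 5091 IOPS with an average latency of about 24.9 ms at a queue depth of 128. Those numbers are mutually consistent under Little's law (outstanding I/O is approximately IOPS times latency), which is a handy sanity check for any bdevperf result line:

# queue depth divided by IOPS gives the expected average latency in microseconds
echo 'scale=1; 128 * 1000000 / 5091.35' | bc    # ~25141 us expected vs 24928 us reported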
00:21:05.169 10:46:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:05.169 10:46:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:05.169 10:46:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 877501 00:21:05.169 10:46:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:05.169 10:46:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:05.169 10:46:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 877501' 00:21:05.169 killing process with pid 877501 00:21:05.169 10:46:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 877501 00:21:05.169 Received shutdown signal, test time was about 1.000000 seconds 00:21:05.169 00:21:05.169 Latency(us) 00:21:05.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.169 =================================================================================================================== 00:21:05.169 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:05.169 10:46:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 877501 00:21:05.429 10:46:29 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:05.429 10:46:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:05.429 10:46:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:21:05.429 10:46:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:05.429 10:46:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:21:05.429 10:46:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:05.429 10:46:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:05.429 rmmod nvme_tcp 00:21:05.429 rmmod nvme_fabrics 00:21:05.429 rmmod nvme_keyring 00:21:05.429 10:46:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:05.429 10:46:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:21:05.429 10:46:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:21:05.429 10:46:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 877393 ']' 00:21:05.429 10:46:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 877393 00:21:05.429 10:46:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 877393 ']' 00:21:05.429 10:46:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 877393 00:21:05.429 10:46:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:21:05.429 10:46:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:05.429 10:46:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 877393 00:21:05.429 10:46:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:05.429 10:46:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:05.429 10:46:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 877393' 00:21:05.429 killing process with pid 877393 00:21:05.429 10:46:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 877393 00:21:05.429 [2024-06-10 10:46:29.597548] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:05.429 10:46:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 877393 
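The trace above is the autotest killprocess helper shutting both daemons down: it verifies the pid is still alive with kill -0, resolves the command name with ps (here it comes back as reactor_1 for bdevperf and reactor_0 for the target, i.e. the SPDK reactor threads), makes sure it is not about to signal a sudo wrapper, then kills and reaps the process. A condensed reconstruction of the path visible in this trace (the real helper in autotest_common.sh carries more cases and error handling than this sketch):

killprocess() {                                   # sketch of the traced path only
    local pid=$1
    kill -0 "$pid" || return 1                    # is the process still alive?
    local process_name=
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    if [ "$process_name" != sudo ]; then          # never signal a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid"                                   # reap it so sockets and ports are really freed
}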
00:21:05.690 10:46:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:05.690 10:46:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:05.690 10:46:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:05.690 10:46:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:05.690 10:46:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:05.690 10:46:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.690 10:46:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:05.690 10:46:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.604 10:46:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:07.604 10:46:31 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.aqpEFTpTN2 /tmp/tmp.qyetXWRKiN /tmp/tmp.LLy1V17JdC 00:21:07.604 00:21:07.604 real 1m23.994s 00:21:07.604 user 2m8.723s 00:21:07.604 sys 0m27.471s 00:21:07.604 10:46:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:07.604 10:46:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.604 ************************************ 00:21:07.604 END TEST nvmf_tls 00:21:07.604 ************************************ 00:21:07.604 10:46:31 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:07.604 10:46:31 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:21:07.604 10:46:31 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:07.604 10:46:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:07.604 ************************************ 00:21:07.604 START TEST nvmf_fips 00:21:07.604 ************************************ 00:21:07.604 10:46:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:07.865 * Looking for test storage... 
00:21:07.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:07.865 10:46:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:07.865 10:46:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:07.865 10:46:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:07.865 10:46:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:07.865 10:46:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:07.865 10:46:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:07.865 10:46:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:07.865 10:46:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:07.865 10:46:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:07.865 10:46:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:07.865 10:46:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:07.865 10:46:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.865 10:46:32 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:21:07.865 10:46:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:21:08.126 10:46:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:08.126 10:46:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:08.126 10:46:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:08.126 10:46:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:08.126 10:46:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@649 -- # local es=0 00:21:08.126 10:46:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:08.126 10:46:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:21:08.126 10:46:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@637 -- # local arg=openssl 00:21:08.126 10:46:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:08.126 10:46:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # type -t openssl 00:21:08.126 10:46:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:08.126 10:46:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # type -P openssl 00:21:08.126 10:46:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:08.126 10:46:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # arg=/usr/bin/openssl 00:21:08.126 10:46:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # [[ -x /usr/bin/openssl ]] 00:21:08.126 10:46:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # openssl md5 /dev/fd/62 00:21:08.126 Error setting digest 00:21:08.126 0022924CAA7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:08.126 0022924CAA7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:08.126 10:46:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # es=1 00:21:08.126 10:46:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:08.126 10:46:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:08.126 10:46:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:08.126 10:46:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:21:08.126 10:46:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:08.126 10:46:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:08.126 10:46:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:08.126 10:46:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:08.127 10:46:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:08.127 10:46:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.127 10:46:32 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:08.127 10:46:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.127 10:46:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:08.127 10:46:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:08.127 10:46:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:21:08.127 10:46:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:16.269 
10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:16.269 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:16.269 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:16.269 Found net devices under 0000:31:00.0: cvl_0_0 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:16.269 Found net devices under 0000:31:00.1: cvl_0_1 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:16.269 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:16.270 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:16.270 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:16.270 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:16.270 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:16.270 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:16.270 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:16.270 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:16.270 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:16.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:16.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:21:16.270 00:21:16.270 --- 10.0.0.2 ping statistics --- 00:21:16.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.270 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:21:16.270 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:16.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:16.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.362 ms 00:21:16.270 00:21:16.270 --- 10.0.0.1 ping statistics --- 00:21:16.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.270 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:21:16.270 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:16.270 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:21:16.270 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:16.270 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:16.270 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:16.270 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:16.270 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:16.270 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:16.270 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:16.270 10:46:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:21:16.270 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:16.270 10:46:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:16.270 10:46:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:16.270 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=882218 00:21:16.270 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 882218 00:21:16.270 10:46:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:16.270 10:46:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # '[' -z 882218 ']' 00:21:16.270 10:46:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.270 10:46:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:16.270 10:46:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:16.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:16.270 10:46:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:16.270 10:46:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:16.270 [2024-06-10 10:46:39.579500] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:21:16.270 [2024-06-10 10:46:39.579569] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:16.270 EAL: No free 2048 kB hugepages reported on node 1 00:21:16.270 [2024-06-10 10:46:39.667656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.270 [2024-06-10 10:46:39.760765] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:16.270 [2024-06-10 10:46:39.760819] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:16.270 [2024-06-10 10:46:39.760827] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:16.270 [2024-06-10 10:46:39.760834] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:16.270 [2024-06-10 10:46:39.760840] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:16.270 [2024-06-10 10:46:39.760865] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:21:16.270 10:46:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:16.270 10:46:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@863 -- # return 0 00:21:16.270 10:46:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:16.270 10:46:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:16.270 10:46:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:16.270 10:46:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:16.270 10:46:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:21:16.270 10:46:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:16.270 10:46:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:16.270 10:46:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:16.270 10:46:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:16.270 10:46:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:16.270 10:46:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:16.270 10:46:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:16.531 [2024-06-10 10:46:40.560734] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.531 [2024-06-10 10:46:40.576700] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:16.531 [2024-06-10 10:46:40.576754] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:16.531 [2024-06-10 10:46:40.576970] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:16.531 [2024-06-10 10:46:40.606800] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:16.531 malloc0 00:21:16.531 10:46:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:16.531 10:46:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=882533 00:21:16.531 10:46:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 882533 /var/tmp/bdevperf.sock 00:21:16.531 10:46:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:16.531 10:46:40 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@830 -- # '[' -z 882533 ']' 00:21:16.531 10:46:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:16.531 10:46:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:16.531 10:46:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:16.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:16.531 10:46:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:16.531 10:46:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:16.531 [2024-06-10 10:46:40.698761] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:21:16.531 [2024-06-10 10:46:40.698834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid882533 ] 00:21:16.531 EAL: No free 2048 kB hugepages reported on node 1 00:21:16.531 [2024-06-10 10:46:40.756613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.792 [2024-06-10 10:46:40.819359] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:21:17.364 10:46:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:17.364 10:46:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@863 -- # return 0 00:21:17.364 10:46:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:17.364 [2024-06-10 10:46:41.606901] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:17.364 [2024-06-10 10:46:41.606977] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:17.624 TLSTESTn1 00:21:17.624 10:46:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:17.624 Running I/O for 10 seconds... 
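The attach above passes --psk as a file path, which is precisely the usage the two deprecation warnings (nvmf_tcp_psk_path on the target, spdk_nvme_ctrlr_opts.psk on the initiator) refer to; the TLS test earlier in this log already uses the replacement, registering the key with the keyring subsystem and referring to it by name. The same attach expressed that way, reusing the method and parameter names from the bdevperf configuration shown earlier (a sketch fed through a config file at a hypothetical path, not the flow fips.sh actually runs):

# /tmp/tls_initiator.json is a hypothetical path; the key file and NQNs are the ones used in this run
cat > /tmp/tls_initiator.json <<'EOF'
{ "subsystems": [
  { "subsystem": "keyring", "config": [
    { "method": "keyring_file_add_key",
      "params": { "name": "key0",
                  "path": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt" } } ] },
  { "subsystem": "bdev", "config": [
    { "method": "bdev_nvme_attach_controller",
      "params": { "name": "TLSTEST", "trtype": "TCP", "adrfam": "IPv4",
                  "traddr": "10.0.0.2", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode1",
                  "hostnqn": "nqn.2016-06.io.spdk:host1",
                  "psk": "key0" } } ] } ] }
EOF
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /tmp/tls_initiator.json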
00:21:27.623 00:21:27.623 Latency(us) 00:21:27.623 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:27.623 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:27.623 Verification LBA range: start 0x0 length 0x2000 00:21:27.623 TLSTESTn1 : 10.02 4810.65 18.79 0.00 0.00 26568.30 4805.97 45001.39 00:21:27.623 =================================================================================================================== 00:21:27.623 Total : 4810.65 18.79 0.00 0.00 26568.30 4805.97 45001.39 00:21:27.623 0 00:21:27.623 10:46:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:27.623 10:46:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:27.623 10:46:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # type=--id 00:21:27.623 10:46:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # id=0 00:21:27.623 10:46:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:21:27.623 10:46:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:27.623 10:46:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:21:27.623 10:46:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:21:27.623 10:46:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # for n in $shm_files 00:21:27.623 10:46:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:27.623 nvmf_trace.0 00:21:27.884 10:46:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@822 -- # return 0 00:21:27.884 10:46:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 882533 00:21:27.884 10:46:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@949 -- # '[' -z 882533 ']' 00:21:27.884 10:46:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # kill -0 882533 00:21:27.884 10:46:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # uname 00:21:27.884 10:46:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:27.884 10:46:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 882533 00:21:27.884 10:46:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:21:27.884 10:46:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:21:27.884 10:46:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # echo 'killing process with pid 882533' 00:21:27.884 killing process with pid 882533 00:21:27.884 10:46:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@968 -- # kill 882533 00:21:27.884 Received shutdown signal, test time was about 10.000000 seconds 00:21:27.884 00:21:27.884 Latency(us) 00:21:27.884 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:27.884 =================================================================================================================== 00:21:27.884 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:27.884 [2024-06-10 10:46:52.002653] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:27.884 10:46:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@973 -- # wait 882533 00:21:27.884 10:46:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:27.884 10:46:52 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:21:27.884 10:46:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:21:27.884 10:46:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:27.884 10:46:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:21:27.884 10:46:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:27.884 10:46:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:27.884 rmmod nvme_tcp 00:21:27.884 rmmod nvme_fabrics 00:21:27.884 rmmod nvme_keyring 00:21:28.144 10:46:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:28.144 10:46:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:21:28.144 10:46:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:21:28.144 10:46:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 882218 ']' 00:21:28.144 10:46:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 882218 00:21:28.144 10:46:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@949 -- # '[' -z 882218 ']' 00:21:28.144 10:46:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # kill -0 882218 00:21:28.144 10:46:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # uname 00:21:28.144 10:46:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:28.144 10:46:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 882218 00:21:28.144 10:46:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:28.144 10:46:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:28.144 10:46:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # echo 'killing process with pid 882218' 00:21:28.144 killing process with pid 882218 00:21:28.144 10:46:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@968 -- # kill 882218 00:21:28.144 [2024-06-10 10:46:52.243445] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:28.144 [2024-06-10 10:46:52.243475] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:28.144 10:46:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@973 -- # wait 882218 00:21:28.144 10:46:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:28.144 10:46:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:28.144 10:46:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:28.144 10:46:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:28.144 10:46:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:28.144 10:46:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.144 10:46:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:28.144 10:46:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:30.688 10:46:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:30.688 10:46:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:30.688 00:21:30.688 real 0m22.554s 00:21:30.688 user 0m23.525s 00:21:30.688 sys 0m9.717s 00:21:30.688 10:46:54 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:21:30.688 10:46:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:30.688 ************************************ 00:21:30.688 END TEST nvmf_fips 00:21:30.688 ************************************ 00:21:30.688 10:46:54 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:21:30.688 10:46:54 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:21:30.688 10:46:54 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:21:30.688 10:46:54 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:21:30.688 10:46:54 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:21:30.688 10:46:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:37.279 10:47:01 nvmf_tcp -- 
nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:37.279 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:37.279 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:37.279 Found net devices under 0000:31:00.0: cvl_0_0 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:37.279 Found net devices under 0000:31:00.1: cvl_0_1 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:21:37.279 10:47:01 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:37.279 10:47:01 
nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:21:37.279 10:47:01 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:37.279 10:47:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:37.279 ************************************ 00:21:37.279 START TEST nvmf_perf_adq 00:21:37.279 ************************************ 00:21:37.279 10:47:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:37.279 * Looking for test storage... 00:21:37.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:37.279 10:47:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:37.279 10:47:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:37.279 10:47:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:37.279 10:47:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:37.279 10:47:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:37.279 10:47:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:37.279 10:47:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:37.279 10:47:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:37.279 10:47:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:37.279 10:47:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:37.279 10:47:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:37.279 10:47:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:37.541 10:47:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:37.541 10:47:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:37.541 10:47:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:37.541 10:47:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:37.541 10:47:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:37.541 10:47:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:37.541 10:47:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:37.541 10:47:01 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:37.541 10:47:01 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:37.541 10:47:01 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:37.541 10:47:01 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.541 10:47:01 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.541 10:47:01 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.541 10:47:01 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:37.541 10:47:01 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.541 10:47:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:21:37.541 10:47:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:37.541 10:47:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:37.541 10:47:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:37.541 10:47:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:37.541 10:47:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:37.541 10:47:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:37.541 10:47:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:37.541 10:47:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:37.541 10:47:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:37.541 10:47:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:37.541 10:47:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:44.131 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
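The NIC discovery running here is essentially a sysfs walk: common.sh matches each PCI function against its E810/x722/Mellanox ID tables and then globs /sys/bus/pci/devices/$pci/net/ to get the interface name, which is where the "Found net devices under ..." lines that follow come from. A condensed equivalent that reads sysfs directly, assuming the two E810 ports at 0000:31:00.0/.1 seen in this run:

  # Sketch: map an NVMe-oF-capable NIC's PCI address to its net device name,
  # the same /sys/bus/pci/devices/$pci/net/* trick the trace above relies on.
  for pci in 0000:31:00.0 0000:31:00.1; do            # E810 functions from this log
      vendor=$(cat /sys/bus/pci/devices/$pci/vendor)  # expect 0x8086
      device=$(cat /sys/bus/pci/devices/$pci/device)  # expect 0x159b
      for netdir in /sys/bus/pci/devices/$pci/net/*; do
          [ -e "$netdir" ] || continue                # skip if no netdev is bound
          echo "Found net devices under $pci: $(basename "$netdir")"   # e.g. cvl_0_0
      done
  done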
00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:44.131 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:44.131 Found net devices under 0000:31:00.0: cvl_0_0 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:44.131 Found net devices under 0000:31:00.1: cvl_0_1 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 
-- # (( 2 == 0 )) 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:21:44.131 10:47:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:21:45.518 10:47:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:21:46.901 10:47:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:52.282 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:52.282 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:52.282 Found net devices under 0000:31:00.0: cvl_0_0 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:52.282 Found net devices under 0000:31:00.1: cvl_0_1 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:52.282 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:52.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:52.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.505 ms 00:21:52.282 00:21:52.282 --- 10.0.0.2 ping statistics --- 00:21:52.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.283 rtt min/avg/max/mdev = 0.505/0.505/0.505/0.000 ms 00:21:52.283 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:52.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:52.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.449 ms 00:21:52.283 00:21:52.283 --- 10.0.0.1 ping statistics --- 00:21:52.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.283 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:21:52.283 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:52.283 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:21:52.283 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:52.283 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:52.283 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:52.283 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:52.283 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:52.283 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:52.283 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:52.283 10:47:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:52.283 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:52.283 10:47:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:52.283 10:47:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:52.283 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=894386 00:21:52.283 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 894386 00:21:52.283 10:47:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:52.283 10:47:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 -- # '[' -z 894386 ']' 00:21:52.283 10:47:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.283 10:47:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:52.283 10:47:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
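Before the target comes up, nvmftestinit carves the two E810 ports into a point-to-point topology: cvl_0_0 is moved into a private namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and an iptables rule opens TCP/4420 for NVMe/TCP. Condensed from the commands in this log; the interface names and addresses are the ones this run uses:

  # Sketch of the loop-back-over-two-ports topology used by nvmftestinit.
  ip netns add cvl_0_0_ns_spdk                       # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move port 0 into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
  ping -c 1 10.0.0.2                                 # sanity-check both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # The target itself then runs inside that namespace, as traced just above:
  #   ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc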
00:21:52.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:52.283 10:47:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:52.283 10:47:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:52.544 [2024-06-10 10:47:16.613084] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:21:52.544 [2024-06-10 10:47:16.613152] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:52.544 EAL: No free 2048 kB hugepages reported on node 1 00:21:52.544 [2024-06-10 10:47:16.684895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:52.544 [2024-06-10 10:47:16.761712] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:52.544 [2024-06-10 10:47:16.761749] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:52.544 [2024-06-10 10:47:16.761757] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:52.544 [2024-06-10 10:47:16.761763] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:52.544 [2024-06-10 10:47:16.761769] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:52.544 [2024-06-10 10:47:16.761914] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:21:52.544 [2024-06-10 10:47:16.762025] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:21:52.544 [2024-06-10 10:47:16.762148] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.544 [2024-06-10 10:47:16.762149] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:21:53.114 10:47:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:53.114 10:47:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@863 -- # return 0 00:21:53.114 10:47:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:53.114 10:47:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:53.114 10:47:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.375 10:47:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:53.375 10:47:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:21:53.375 10:47:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:53.375 10:47:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:53.375 10:47:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:53.375 10:47:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.375 10:47:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:53.375 10:47:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:53.375 10:47:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:53.375 10:47:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:53.375 10:47:17 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:21:53.375 10:47:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:53.375 10:47:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:53.375 10:47:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:53.375 10:47:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.375 10:47:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:53.375 10:47:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:53.375 10:47:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:53.375 10:47:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.375 [2024-06-10 10:47:17.568504] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:53.375 10:47:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:53.375 10:47:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:53.375 10:47:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:53.375 10:47:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.375 Malloc1 00:21:53.375 10:47:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:53.375 10:47:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:53.375 10:47:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:53.375 10:47:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.375 10:47:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:53.375 10:47:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:53.375 10:47:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:53.375 10:47:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.375 10:47:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:53.375 10:47:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:53.375 10:47:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:53.375 10:47:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.375 [2024-06-10 10:47:17.627671] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:53.375 [2024-06-10 10:47:17.627914] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:53.375 10:47:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:53.375 10:47:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=894579 00:21:53.375 10:47:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:21:53.375 10:47:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:53.635 EAL: No free 2048 kB hugepages reported on node 1 00:21:55.548 10:47:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:21:55.548 10:47:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:55.548 10:47:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:55.548 10:47:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:55.548 10:47:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:21:55.548 "tick_rate": 2400000000, 00:21:55.548 "poll_groups": [ 00:21:55.548 { 00:21:55.548 "name": "nvmf_tgt_poll_group_000", 00:21:55.548 "admin_qpairs": 1, 00:21:55.548 "io_qpairs": 1, 00:21:55.548 "current_admin_qpairs": 1, 00:21:55.548 "current_io_qpairs": 1, 00:21:55.548 "pending_bdev_io": 0, 00:21:55.548 "completed_nvme_io": 20869, 00:21:55.549 "transports": [ 00:21:55.549 { 00:21:55.549 "trtype": "TCP" 00:21:55.549 } 00:21:55.549 ] 00:21:55.549 }, 00:21:55.549 { 00:21:55.549 "name": "nvmf_tgt_poll_group_001", 00:21:55.549 "admin_qpairs": 0, 00:21:55.549 "io_qpairs": 1, 00:21:55.549 "current_admin_qpairs": 0, 00:21:55.549 "current_io_qpairs": 1, 00:21:55.549 "pending_bdev_io": 0, 00:21:55.549 "completed_nvme_io": 27894, 00:21:55.549 "transports": [ 00:21:55.549 { 00:21:55.549 "trtype": "TCP" 00:21:55.549 } 00:21:55.549 ] 00:21:55.549 }, 00:21:55.549 { 00:21:55.549 "name": "nvmf_tgt_poll_group_002", 00:21:55.549 "admin_qpairs": 0, 00:21:55.549 "io_qpairs": 1, 00:21:55.549 "current_admin_qpairs": 0, 00:21:55.549 "current_io_qpairs": 1, 00:21:55.549 "pending_bdev_io": 0, 00:21:55.549 "completed_nvme_io": 20459, 00:21:55.549 "transports": [ 00:21:55.549 { 00:21:55.549 "trtype": "TCP" 00:21:55.549 } 00:21:55.549 ] 00:21:55.549 }, 00:21:55.549 { 00:21:55.549 "name": "nvmf_tgt_poll_group_003", 00:21:55.549 "admin_qpairs": 0, 00:21:55.549 "io_qpairs": 1, 00:21:55.549 "current_admin_qpairs": 0, 00:21:55.549 "current_io_qpairs": 1, 00:21:55.549 "pending_bdev_io": 0, 00:21:55.549 "completed_nvme_io": 20821, 00:21:55.549 "transports": [ 00:21:55.549 { 00:21:55.549 "trtype": "TCP" 00:21:55.549 } 00:21:55.549 ] 00:21:55.549 } 00:21:55.549 ] 00:21:55.549 }' 00:21:55.549 10:47:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:55.549 10:47:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:21:55.549 10:47:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:21:55.549 10:47:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:21:55.549 10:47:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 894579 00:22:03.689 Initializing NVMe Controllers 00:22:03.689 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:03.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:03.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:03.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:03.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:03.689 Initialization complete. Launching workers. 
00:22:03.689 ======================================================== 00:22:03.689 Latency(us) 00:22:03.689 Device Information : IOPS MiB/s Average min max 00:22:03.689 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13949.40 54.49 4588.90 1048.52 8664.34 00:22:03.689 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14877.20 58.11 4301.89 1078.99 11915.45 00:22:03.689 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14493.60 56.62 4415.95 1272.92 9155.78 00:22:03.689 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11615.80 45.37 5510.09 1767.11 11521.01 00:22:03.689 ======================================================== 00:22:03.689 Total : 54936.00 214.59 4660.32 1048.52 11915.45 00:22:03.689 00:22:03.689 10:47:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:22:03.689 10:47:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:03.689 10:47:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:03.689 10:47:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:03.689 10:47:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:03.689 10:47:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:03.689 10:47:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:03.689 rmmod nvme_tcp 00:22:03.689 rmmod nvme_fabrics 00:22:03.689 rmmod nvme_keyring 00:22:03.689 10:47:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:03.689 10:47:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:03.689 10:47:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:03.689 10:47:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 894386 ']' 00:22:03.689 10:47:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 894386 00:22:03.689 10:47:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@949 -- # '[' -z 894386 ']' 00:22:03.689 10:47:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # kill -0 894386 00:22:03.689 10:47:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # uname 00:22:03.689 10:47:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:03.689 10:47:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 894386 00:22:03.689 10:47:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:03.689 10:47:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:03.689 10:47:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 894386' 00:22:03.689 killing process with pid 894386 00:22:03.689 10:47:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@968 -- # kill 894386 00:22:03.689 [2024-06-10 10:47:27.919339] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:03.689 10:47:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@973 -- # wait 894386 00:22:03.951 10:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:03.951 10:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:03.951 10:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:03.951 10:47:28 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:03.951 10:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:03.951 10:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.951 10:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:03.951 10:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.867 10:47:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:05.867 10:47:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:22:05.867 10:47:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:07.784 10:47:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:09.696 10:47:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:14.985 10:47:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:22:14.985 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:14.985 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:14.985 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:14.985 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:14.985 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:14.985 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.985 10:47:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:14.985 10:47:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.985 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:14.985 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:14.985 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:14.985 10:47:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:14.985 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:14.985 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:14.985 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:14.985 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:14.985 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:14.985 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:14.985 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:14.985 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:14.985 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:14.985 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:14.985 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:14.985 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:14.985 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:14.985 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:14.985 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:14.985 
10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:14.985 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:14.985 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:14.986 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:14.986 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:14.986 Found net devices under 0000:31:00.0: cvl_0_0 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:14.986 Found net devices under 0000:31:00.1: cvl_0_1 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush 
cvl_0_1 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:14.986 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:14.986 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.592 ms 00:22:14.986 00:22:14.986 --- 10.0.0.2 ping statistics --- 00:22:14.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.986 rtt min/avg/max/mdev = 0.592/0.592/0.592/0.000 ms 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:14.986 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:14.986 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.357 ms 00:22:14.986 00:22:14.986 --- 10.0.0.1 ping statistics --- 00:22:14.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.986 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:14.986 net.core.busy_poll = 1 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:14.986 net.core.busy_read = 1 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:14.986 10:47:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec 
cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:14.986 10:47:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:14.986 10:47:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:14.986 10:47:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:14.986 10:47:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:14.986 10:47:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:14.986 10:47:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:14.986 10:47:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:14.986 10:47:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=899170 00:22:14.986 10:47:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 899170 00:22:14.986 10:47:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:14.986 10:47:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 -- # '[' -z 899170 ']' 00:22:14.986 10:47:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.986 10:47:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:14.986 10:47:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:14.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:14.986 10:47:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:14.986 10:47:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:14.986 [2024-06-10 10:47:39.219285] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:22:14.987 [2024-06-10 10:47:39.219362] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:14.987 EAL: No free 2048 kB hugepages reported on node 1 00:22:15.249 [2024-06-10 10:47:39.293217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:15.249 [2024-06-10 10:47:39.369204] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:15.249 [2024-06-10 10:47:39.369239] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:15.249 [2024-06-10 10:47:39.369253] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:15.249 [2024-06-10 10:47:39.369259] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:15.249 [2024-06-10 10:47:39.369265] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
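(The sequence just above is the per-run network plumbing for ADQ: the target-side port cvl_0_0 is moved into its own namespace, addressed, and opened in the firewall, then the driver's traffic classes are configured so NVMe/TCP traffic on port 4420 lands in a dedicated hardware TC with busy polling enabled. A condensed sketch of those commands follows, with ns0/eth0/eth1 as placeholder names for cvl_0_0_ns_spdk/cvl_0_0/cvl_0_1 from this run; the set_xps_rxqs helper that pins XPS/RX queues is left out.)

  # Move the target port into its own namespace and address both ends
  ip netns add ns0
  ip link set eth0 netns ns0
  ip addr add 10.0.0.1/24 dev eth1                       # initiator-side port
  ip netns exec ns0 ip addr add 10.0.0.2/24 dev eth0     # target-side port
  ip link set eth1 up
  ip netns exec ns0 ip link set eth0 up
  ip netns exec ns0 ip link set lo up
  iptables -I INPUT 1 -i eth1 -p tcp --dport 4420 -j ACCEPT
  # ADQ prerequisites on the target port (run inside the namespace)
  ip netns exec ns0 ethtool --offload eth0 hw-tc-offload on
  ip netns exec ns0 ethtool --set-priv-flags eth0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  # Two traffic classes: TC0 (queues 0-1) default, TC1 (queues 2-3) dedicated to NVMe/TCP
  ip netns exec ns0 tc qdisc add dev eth0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  ip netns exec ns0 tc qdisc add dev eth0 ingress
  # Hardware-offloaded flower filter steering TCP/4420 into TC1
  ip netns exec ns0 tc filter add dev eth0 protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

Splitting the queues this way is what lets the perf load later concentrate its I/O on the poll groups bound to TC1, which is exactly what the nvmf_get_stats check further down verifies.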
00:22:15.249 [2024-06-10 10:47:39.369358] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.249 [2024-06-10 10:47:39.369629] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:22:15.249 [2024-06-10 10:47:39.369788] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:22:15.249 [2024-06-10 10:47:39.369788] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:22:15.821 10:47:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:15.821 10:47:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@863 -- # return 0 00:22:15.821 10:47:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:15.821 10:47:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:15.821 10:47:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:15.821 10:47:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:15.821 10:47:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:22:15.821 10:47:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:15.821 10:47:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:15.821 10:47:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:15.821 10:47:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:15.821 10:47:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:15.821 10:47:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:15.821 10:47:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:15.821 10:47:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:15.821 10:47:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:15.821 10:47:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:15.821 10:47:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:15.821 10:47:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:15.821 10:47:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.082 10:47:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:16.082 10:47:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:16.082 10:47:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:16.082 10:47:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.082 [2024-06-10 10:47:40.172512] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:16.082 10:47:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:16.082 10:47:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:16.082 10:47:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:16.082 10:47:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.082 Malloc1 00:22:16.082 10:47:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:16.082 10:47:40 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:16.082 10:47:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:16.082 10:47:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.082 10:47:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:16.082 10:47:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:16.082 10:47:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:16.082 10:47:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.082 10:47:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:16.082 10:47:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:16.082 10:47:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:16.082 10:47:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.082 [2024-06-10 10:47:40.231722] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:16.082 [2024-06-10 10:47:40.231965] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:16.082 10:47:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:16.082 10:47:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=899395 00:22:16.082 10:47:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:22:16.082 10:47:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:16.082 EAL: No free 2048 kB hugepages reported on node 1 00:22:17.993 10:47:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:22:17.994 10:47:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:17.994 10:47:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.994 10:47:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:17.994 10:47:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:22:17.994 "tick_rate": 2400000000, 00:22:17.994 "poll_groups": [ 00:22:17.994 { 00:22:17.994 "name": "nvmf_tgt_poll_group_000", 00:22:17.994 "admin_qpairs": 1, 00:22:17.994 "io_qpairs": 2, 00:22:17.994 "current_admin_qpairs": 1, 00:22:17.994 "current_io_qpairs": 2, 00:22:17.994 "pending_bdev_io": 0, 00:22:17.994 "completed_nvme_io": 27749, 00:22:17.994 "transports": [ 00:22:17.994 { 00:22:17.994 "trtype": "TCP" 00:22:17.994 } 00:22:17.994 ] 00:22:17.994 }, 00:22:17.994 { 00:22:17.994 "name": "nvmf_tgt_poll_group_001", 00:22:17.994 "admin_qpairs": 0, 00:22:17.994 "io_qpairs": 2, 00:22:17.994 "current_admin_qpairs": 0, 00:22:17.994 "current_io_qpairs": 2, 00:22:17.994 "pending_bdev_io": 0, 00:22:17.994 "completed_nvme_io": 41820, 00:22:17.994 "transports": [ 00:22:17.994 { 00:22:17.994 "trtype": "TCP" 00:22:17.994 } 00:22:17.994 ] 00:22:17.994 }, 00:22:17.994 { 00:22:17.994 "name": 
"nvmf_tgt_poll_group_002", 00:22:17.994 "admin_qpairs": 0, 00:22:17.994 "io_qpairs": 0, 00:22:17.994 "current_admin_qpairs": 0, 00:22:17.994 "current_io_qpairs": 0, 00:22:17.994 "pending_bdev_io": 0, 00:22:17.994 "completed_nvme_io": 0, 00:22:17.994 "transports": [ 00:22:17.994 { 00:22:17.994 "trtype": "TCP" 00:22:17.994 } 00:22:17.994 ] 00:22:17.994 }, 00:22:17.994 { 00:22:17.994 "name": "nvmf_tgt_poll_group_003", 00:22:17.994 "admin_qpairs": 0, 00:22:17.994 "io_qpairs": 0, 00:22:17.994 "current_admin_qpairs": 0, 00:22:17.994 "current_io_qpairs": 0, 00:22:17.994 "pending_bdev_io": 0, 00:22:17.994 "completed_nvme_io": 0, 00:22:17.994 "transports": [ 00:22:17.994 { 00:22:17.994 "trtype": "TCP" 00:22:17.994 } 00:22:17.994 ] 00:22:17.994 } 00:22:17.994 ] 00:22:17.994 }' 00:22:17.994 10:47:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:17.994 10:47:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:22:18.254 10:47:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:22:18.254 10:47:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:22:18.254 10:47:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 899395 00:22:26.389 Initializing NVMe Controllers 00:22:26.389 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:26.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:26.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:26.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:26.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:26.389 Initialization complete. Launching workers. 
00:22:26.389 ======================================================== 00:22:26.389 Latency(us) 00:22:26.389 Device Information : IOPS MiB/s Average min max 00:22:26.389 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13251.16 51.76 4829.59 1060.44 52126.05 00:22:26.389 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10144.76 39.63 6328.88 1194.70 49347.12 00:22:26.389 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11107.93 43.39 5762.33 1346.26 48766.48 00:22:26.389 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5775.51 22.56 11104.83 1473.09 52776.70 00:22:26.389 ======================================================== 00:22:26.389 Total : 40279.36 157.34 6364.21 1060.44 52776.70 00:22:26.389 00:22:26.389 10:47:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:22:26.389 10:47:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:26.390 10:47:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:26.390 10:47:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:26.390 10:47:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:26.390 10:47:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:26.390 10:47:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:26.390 rmmod nvme_tcp 00:22:26.390 rmmod nvme_fabrics 00:22:26.390 rmmod nvme_keyring 00:22:26.390 10:47:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:26.390 10:47:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:26.390 10:47:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:26.390 10:47:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 899170 ']' 00:22:26.390 10:47:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 899170 00:22:26.390 10:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@949 -- # '[' -z 899170 ']' 00:22:26.390 10:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # kill -0 899170 00:22:26.390 10:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # uname 00:22:26.390 10:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:26.390 10:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 899170 00:22:26.390 10:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:26.390 10:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:26.390 10:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 899170' 00:22:26.390 killing process with pid 899170 00:22:26.390 10:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@968 -- # kill 899170 00:22:26.390 [2024-06-10 10:47:50.557269] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:26.390 10:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@973 -- # wait 899170 00:22:26.652 10:47:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:26.652 10:47:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:26.652 10:47:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:26.652 10:47:50 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:26.652 10:47:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:26.652 10:47:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.652 10:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:26.652 10:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:29.969 10:47:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:29.969 10:47:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:22:29.969 00:22:29.969 real 0m52.331s 00:22:29.969 user 2m47.116s 00:22:29.969 sys 0m11.205s 00:22:29.969 10:47:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:29.969 10:47:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:29.969 ************************************ 00:22:29.969 END TEST nvmf_perf_adq 00:22:29.969 ************************************ 00:22:29.969 10:47:53 nvmf_tcp -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:29.969 10:47:53 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:22:29.969 10:47:53 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:29.969 10:47:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:29.969 ************************************ 00:22:29.969 START TEST nvmf_shutdown 00:22:29.969 ************************************ 00:22:29.969 10:47:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:29.969 * Looking for test storage... 
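(Before the nvmf_shutdown suite starts, a quick recap of the SPDK side of the perf_adq run that just ended: the ADQ-relevant target options were all applied over the RPC socket before framework init, the steering check counted poll groups that received no I/O qpairs while spdk_nvme_perf ran, and nvmftestfini unloaded the initiator modules and tore the namespace down. The sketch below replays those pieces with scripts/rpc.py from the SPDK tree; rpc_cmd in the test is a wrapper around it, the failure action on the steering check is illustrative only, and the ip netns delete line is an assumed equivalent of the unexpanded remove_spdk_ns helper.)

  RPC=./scripts/rpc.py                            # scripts/rpc.py from the SPDK checkout
  # ADQ socket options must be set before framework_start_init (adq_configure_nvmf_target)
  $RPC sock_impl_set_options -i posix --enable-placement-id 1 --enable-zerocopy-send-server
  $RPC framework_start_init
  $RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1   # flags as in the log
  $RPC bdev_malloc_create 64 512 -b Malloc1
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # While spdk_nvme_perf (-q 64 -o 4096 -w randread -c 0xF0) runs, ADQ steering is considered
  # good if at least two of the four poll groups stayed idle (the log shows count=2 vs "-lt 2")
  idle=$($RPC nvmf_get_stats | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l)
  if [ "$idle" -lt 2 ]; then echo "ADQ steering check failed" >&2; fi   # illustrative failure action
  # Teardown, as done by nvmftestfini at the end of the run
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                                 # PID saved by nvmfappstart; killprocess also waits on it
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null     # assumed equivalent of remove_spdk_ns
  ip -4 addr flush cvl_0_1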
00:22:29.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:29.969 10:47:53 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:29.969 10:47:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:29.969 10:47:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:29.969 10:47:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:29.969 10:47:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:29.969 10:47:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:29.969 10:47:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:29.969 10:47:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:29.969 10:47:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:29.969 10:47:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:29.969 10:47:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:29.969 10:47:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:29.969 10:47:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:29.969 10:47:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:29.969 10:47:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:29.969 10:47:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:29.969 10:47:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:29.969 10:47:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:29.969 10:47:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:29.969 10:47:53 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:29.969 10:47:53 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:29.969 10:47:53 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:29.969 10:47:53 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.969 10:47:53 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.969 10:47:53 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.969 10:47:53 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:29.970 10:47:53 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.970 10:47:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:22:29.970 10:47:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:29.970 10:47:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:29.970 10:47:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:29.970 10:47:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:29.970 10:47:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:29.970 10:47:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:29.970 10:47:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:29.970 10:47:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:29.970 10:47:53 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:29.970 10:47:53 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:29.970 10:47:54 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:29.970 10:47:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:22:29.970 10:47:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:29.970 10:47:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:29.970 ************************************ 00:22:29.970 START TEST nvmf_shutdown_tc1 00:22:29.970 ************************************ 00:22:29.970 10:47:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc1 00:22:29.970 10:47:54 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:22:29.970 10:47:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:29.970 10:47:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:29.970 10:47:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:29.970 10:47:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:29.970 10:47:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:29.970 10:47:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:29.970 10:47:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:29.970 10:47:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:29.970 10:47:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:29.970 10:47:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:29.970 10:47:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:29.970 10:47:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:29.970 10:47:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:38.116 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:38.116 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:38.116 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:38.116 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:38.116 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:38.116 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:38.116 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:38.116 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:22:38.116 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:38.116 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:22:38.116 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:22:38.116 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:22:38.116 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:22:38.116 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:22:38.116 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:38.116 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:38.117 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:38.117 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.117 10:48:01 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:38.117 Found net devices under 0000:31:00.0: cvl_0_0 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:38.117 Found net devices under 0000:31:00.1: cvl_0_1 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:38.117 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:38.117 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms 00:22:38.117 00:22:38.117 --- 10.0.0.2 ping statistics --- 00:22:38.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.117 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:38.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:38.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.359 ms 00:22:38.117 00:22:38.117 --- 10.0.0.1 ping statistics --- 00:22:38.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.117 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=906009 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 906009 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 906009 ']' 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:38.117 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:38.118 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:38.118 10:48:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:38.118 [2024-06-10 10:48:01.508697] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
00:22:38.118 [2024-06-10 10:48:01.508761] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.118 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.118 [2024-06-10 10:48:01.600217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:38.118 [2024-06-10 10:48:01.694939] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.118 [2024-06-10 10:48:01.694998] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.118 [2024-06-10 10:48:01.695006] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.118 [2024-06-10 10:48:01.695012] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.118 [2024-06-10 10:48:01.695018] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:38.118 [2024-06-10 10:48:01.695159] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:22:38.118 [2024-06-10 10:48:01.695324] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:22:38.118 [2024-06-10 10:48:01.695458] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.118 [2024-06-10 10:48:01.695458] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4 00:22:38.118 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:38.118 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:22:38.118 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:38.118 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:38.118 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:38.118 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:38.118 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:38.118 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:38.118 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:38.118 [2024-06-10 10:48:02.347757] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.118 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:38.118 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:38.118 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:38.118 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:38.118 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:38.118 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:38.118 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.118 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:38.118 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.118 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:38.118 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.118 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:38.118 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.118 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:38.118 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.118 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:38.118 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.118 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:38.118 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.118 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:38.118 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.118 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:38.378 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.378 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:38.378 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:38.378 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:38.378 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:38.378 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:38.378 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:38.378 Malloc1 00:22:38.378 [2024-06-10 10:48:02.451079] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:38.378 [2024-06-10 10:48:02.451308] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:38.378 Malloc2 00:22:38.378 Malloc3 00:22:38.378 Malloc4 00:22:38.378 Malloc5 00:22:38.378 Malloc6 00:22:38.378 Malloc7 00:22:38.640 Malloc8 00:22:38.640 Malloc9 00:22:38.640 Malloc10 00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:38.640 10:48:02 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=906417 00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 906417 /var/tmp/bdevperf.sock 00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 906417 ']' 00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:38.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:38.640 { 00:22:38.640 "params": { 00:22:38.640 "name": "Nvme$subsystem", 00:22:38.640 "trtype": "$TEST_TRANSPORT", 00:22:38.640 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.640 "adrfam": "ipv4", 00:22:38.640 "trsvcid": "$NVMF_PORT", 00:22:38.640 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.640 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.640 "hdgst": ${hdgst:-false}, 00:22:38.640 "ddgst": ${ddgst:-false} 00:22:38.640 }, 00:22:38.640 "method": "bdev_nvme_attach_controller" 00:22:38.640 } 00:22:38.640 EOF 00:22:38.640 )") 00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:38.640 { 00:22:38.640 "params": { 00:22:38.640 "name": "Nvme$subsystem", 00:22:38.640 "trtype": "$TEST_TRANSPORT", 00:22:38.640 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.640 "adrfam": "ipv4", 00:22:38.640 "trsvcid": "$NVMF_PORT", 00:22:38.640 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.640 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.640 "hdgst": ${hdgst:-false}, 00:22:38.640 "ddgst": ${ddgst:-false} 00:22:38.640 }, 00:22:38.640 "method": "bdev_nvme_attach_controller" 00:22:38.640 } 00:22:38.640 EOF 00:22:38.640 )") 00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- 
# for subsystem in "${@:-1}" 00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:38.640 { 00:22:38.640 "params": { 00:22:38.640 "name": "Nvme$subsystem", 00:22:38.640 "trtype": "$TEST_TRANSPORT", 00:22:38.640 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.640 "adrfam": "ipv4", 00:22:38.640 "trsvcid": "$NVMF_PORT", 00:22:38.640 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.640 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.640 "hdgst": ${hdgst:-false}, 00:22:38.640 "ddgst": ${ddgst:-false} 00:22:38.640 }, 00:22:38.640 "method": "bdev_nvme_attach_controller" 00:22:38.640 } 00:22:38.640 EOF 00:22:38.640 )") 00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:38.640 { 00:22:38.640 "params": { 00:22:38.640 "name": "Nvme$subsystem", 00:22:38.640 "trtype": "$TEST_TRANSPORT", 00:22:38.640 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.640 "adrfam": "ipv4", 00:22:38.640 "trsvcid": "$NVMF_PORT", 00:22:38.640 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.640 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.640 "hdgst": ${hdgst:-false}, 00:22:38.640 "ddgst": ${ddgst:-false} 00:22:38.640 }, 00:22:38.640 "method": "bdev_nvme_attach_controller" 00:22:38.640 } 00:22:38.640 EOF 00:22:38.640 )") 00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:38.640 { 00:22:38.640 "params": { 00:22:38.640 "name": "Nvme$subsystem", 00:22:38.640 "trtype": "$TEST_TRANSPORT", 00:22:38.640 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.640 "adrfam": "ipv4", 00:22:38.640 "trsvcid": "$NVMF_PORT", 00:22:38.640 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.640 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.640 "hdgst": ${hdgst:-false}, 00:22:38.640 "ddgst": ${ddgst:-false} 00:22:38.640 }, 00:22:38.640 "method": "bdev_nvme_attach_controller" 00:22:38.640 } 00:22:38.640 EOF 00:22:38.640 )") 00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:38.640 { 00:22:38.640 "params": { 00:22:38.640 "name": "Nvme$subsystem", 00:22:38.640 "trtype": "$TEST_TRANSPORT", 00:22:38.640 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.640 "adrfam": "ipv4", 00:22:38.640 "trsvcid": "$NVMF_PORT", 00:22:38.640 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.640 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.640 "hdgst": ${hdgst:-false}, 00:22:38.640 "ddgst": ${ddgst:-false} 00:22:38.640 }, 00:22:38.640 "method": "bdev_nvme_attach_controller" 00:22:38.640 } 00:22:38.640 EOF 00:22:38.640 )") 00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
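The repeated config+=("$(cat <<-EOF ... EOF)") entries traced above are gen_nvmf_target_json emitting one bdev_nvme_attach_controller fragment per subsystem (1 through 10). A condensed sketch of that pattern follows; it is reconstructed from this trace rather than quoted from nvmf/common.sh, and in particular the enclosing "subsystems"/"bdev" wrapper handed to jq is an assumption, since only the comma-joined fragment list is visible in the printf output further down.

gen_nvmf_target_json() {
    local subsystem config=()

    for subsystem in "${@:-1}"; do
        # One fragment per subsystem; TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and
        # NVMF_PORT come from the test environment (tcp / 10.0.0.2 / 4420 in this run).
        config+=("$(
            cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done

    # Join the fragments with commas (the "},{" boundaries visible in the printf
    # output below) and let jq validate the result; bdev_svc and bdevperf read
    # it through --json on a /dev/fd/6x process substitution.
    jq . <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [ $(IFS=","; printf '%s\n' "${config[*]}") ]
    }
  ]
}
JSON
}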
00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:38.640 { 00:22:38.640 "params": { 00:22:38.640 "name": "Nvme$subsystem", 00:22:38.640 "trtype": "$TEST_TRANSPORT", 00:22:38.640 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.640 "adrfam": "ipv4", 00:22:38.640 "trsvcid": "$NVMF_PORT", 00:22:38.640 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.640 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.640 "hdgst": ${hdgst:-false}, 00:22:38.640 "ddgst": ${ddgst:-false} 00:22:38.640 }, 00:22:38.640 "method": "bdev_nvme_attach_controller" 00:22:38.640 } 00:22:38.640 EOF 00:22:38.640 )") 00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:38.640 { 00:22:38.640 "params": { 00:22:38.640 "name": "Nvme$subsystem", 00:22:38.640 "trtype": "$TEST_TRANSPORT", 00:22:38.640 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.640 "adrfam": "ipv4", 00:22:38.640 "trsvcid": "$NVMF_PORT", 00:22:38.640 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.640 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.640 "hdgst": ${hdgst:-false}, 00:22:38.640 "ddgst": ${ddgst:-false} 00:22:38.640 }, 00:22:38.640 "method": "bdev_nvme_attach_controller" 00:22:38.640 } 00:22:38.640 EOF 00:22:38.640 )") 00:22:38.640 [2024-06-10 10:48:02.909400] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:22:38.640 [2024-06-10 10:48:02.909465] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:38.640 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:38.641 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:38.641 { 00:22:38.641 "params": { 00:22:38.641 "name": "Nvme$subsystem", 00:22:38.641 "trtype": "$TEST_TRANSPORT", 00:22:38.641 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.641 "adrfam": "ipv4", 00:22:38.641 "trsvcid": "$NVMF_PORT", 00:22:38.641 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.641 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.641 "hdgst": ${hdgst:-false}, 00:22:38.641 "ddgst": ${ddgst:-false} 00:22:38.641 }, 00:22:38.641 "method": "bdev_nvme_attach_controller" 00:22:38.641 } 00:22:38.641 EOF 00:22:38.641 )") 00:22:38.641 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:38.641 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:38.641 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:38.641 { 00:22:38.641 "params": { 00:22:38.641 "name": "Nvme$subsystem", 00:22:38.641 "trtype": "$TEST_TRANSPORT", 00:22:38.641 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.641 "adrfam": "ipv4", 00:22:38.641 "trsvcid": "$NVMF_PORT", 00:22:38.641 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.641 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.641 "hdgst": ${hdgst:-false}, 00:22:38.641 "ddgst": 
${ddgst:-false} 00:22:38.641 }, 00:22:38.641 "method": "bdev_nvme_attach_controller" 00:22:38.641 } 00:22:38.641 EOF 00:22:38.641 )") 00:22:38.641 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:38.901 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:22:38.901 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:38.901 10:48:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:38.901 "params": { 00:22:38.901 "name": "Nvme1", 00:22:38.901 "trtype": "tcp", 00:22:38.901 "traddr": "10.0.0.2", 00:22:38.901 "adrfam": "ipv4", 00:22:38.901 "trsvcid": "4420", 00:22:38.901 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.901 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:38.901 "hdgst": false, 00:22:38.901 "ddgst": false 00:22:38.901 }, 00:22:38.901 "method": "bdev_nvme_attach_controller" 00:22:38.901 },{ 00:22:38.901 "params": { 00:22:38.901 "name": "Nvme2", 00:22:38.901 "trtype": "tcp", 00:22:38.901 "traddr": "10.0.0.2", 00:22:38.901 "adrfam": "ipv4", 00:22:38.901 "trsvcid": "4420", 00:22:38.901 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:38.901 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:38.901 "hdgst": false, 00:22:38.901 "ddgst": false 00:22:38.901 }, 00:22:38.901 "method": "bdev_nvme_attach_controller" 00:22:38.901 },{ 00:22:38.901 "params": { 00:22:38.901 "name": "Nvme3", 00:22:38.901 "trtype": "tcp", 00:22:38.901 "traddr": "10.0.0.2", 00:22:38.901 "adrfam": "ipv4", 00:22:38.901 "trsvcid": "4420", 00:22:38.901 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:38.901 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:38.901 "hdgst": false, 00:22:38.901 "ddgst": false 00:22:38.901 }, 00:22:38.901 "method": "bdev_nvme_attach_controller" 00:22:38.901 },{ 00:22:38.901 "params": { 00:22:38.901 "name": "Nvme4", 00:22:38.901 "trtype": "tcp", 00:22:38.901 "traddr": "10.0.0.2", 00:22:38.901 "adrfam": "ipv4", 00:22:38.901 "trsvcid": "4420", 00:22:38.901 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:38.901 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:38.901 "hdgst": false, 00:22:38.901 "ddgst": false 00:22:38.901 }, 00:22:38.901 "method": "bdev_nvme_attach_controller" 00:22:38.901 },{ 00:22:38.901 "params": { 00:22:38.901 "name": "Nvme5", 00:22:38.901 "trtype": "tcp", 00:22:38.901 "traddr": "10.0.0.2", 00:22:38.901 "adrfam": "ipv4", 00:22:38.901 "trsvcid": "4420", 00:22:38.902 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:38.902 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:38.902 "hdgst": false, 00:22:38.902 "ddgst": false 00:22:38.902 }, 00:22:38.902 "method": "bdev_nvme_attach_controller" 00:22:38.902 },{ 00:22:38.902 "params": { 00:22:38.902 "name": "Nvme6", 00:22:38.902 "trtype": "tcp", 00:22:38.902 "traddr": "10.0.0.2", 00:22:38.902 "adrfam": "ipv4", 00:22:38.902 "trsvcid": "4420", 00:22:38.902 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:38.902 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:38.902 "hdgst": false, 00:22:38.902 "ddgst": false 00:22:38.902 }, 00:22:38.902 "method": "bdev_nvme_attach_controller" 00:22:38.902 },{ 00:22:38.902 "params": { 00:22:38.902 "name": "Nvme7", 00:22:38.902 "trtype": "tcp", 00:22:38.902 "traddr": "10.0.0.2", 00:22:38.902 "adrfam": "ipv4", 00:22:38.902 "trsvcid": "4420", 00:22:38.902 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:38.902 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:38.902 "hdgst": false, 00:22:38.902 "ddgst": false 00:22:38.902 }, 00:22:38.902 "method": "bdev_nvme_attach_controller" 00:22:38.902 },{ 
00:22:38.902 "params": { 00:22:38.902 "name": "Nvme8", 00:22:38.902 "trtype": "tcp", 00:22:38.902 "traddr": "10.0.0.2", 00:22:38.902 "adrfam": "ipv4", 00:22:38.902 "trsvcid": "4420", 00:22:38.902 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:38.902 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:38.902 "hdgst": false, 00:22:38.902 "ddgst": false 00:22:38.902 }, 00:22:38.902 "method": "bdev_nvme_attach_controller" 00:22:38.902 },{ 00:22:38.902 "params": { 00:22:38.902 "name": "Nvme9", 00:22:38.902 "trtype": "tcp", 00:22:38.902 "traddr": "10.0.0.2", 00:22:38.902 "adrfam": "ipv4", 00:22:38.902 "trsvcid": "4420", 00:22:38.902 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:38.902 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:38.902 "hdgst": false, 00:22:38.902 "ddgst": false 00:22:38.902 }, 00:22:38.902 "method": "bdev_nvme_attach_controller" 00:22:38.902 },{ 00:22:38.902 "params": { 00:22:38.902 "name": "Nvme10", 00:22:38.902 "trtype": "tcp", 00:22:38.902 "traddr": "10.0.0.2", 00:22:38.902 "adrfam": "ipv4", 00:22:38.902 "trsvcid": "4420", 00:22:38.902 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:38.902 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:38.902 "hdgst": false, 00:22:38.902 "ddgst": false 00:22:38.902 }, 00:22:38.902 "method": "bdev_nvme_attach_controller" 00:22:38.902 }' 00:22:38.902 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.902 [2024-06-10 10:48:02.971156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.902 [2024-06-10 10:48:03.035779] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.286 10:48:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:40.286 10:48:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:22:40.286 10:48:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:40.286 10:48:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:40.286 10:48:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:40.286 10:48:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:40.286 10:48:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 906417 00:22:40.286 10:48:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:22:40.286 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 906417 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:40.286 10:48:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:22:41.231 10:48:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 906009 00:22:41.231 10:48:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:41.231 10:48:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:41.231 10:48:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:41.231 10:48:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:41.231 10:48:05 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:41.231 10:48:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:41.231 { 00:22:41.231 "params": { 00:22:41.231 "name": "Nvme$subsystem", 00:22:41.231 "trtype": "$TEST_TRANSPORT", 00:22:41.231 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.231 "adrfam": "ipv4", 00:22:41.231 "trsvcid": "$NVMF_PORT", 00:22:41.231 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.231 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.231 "hdgst": ${hdgst:-false}, 00:22:41.231 "ddgst": ${ddgst:-false} 00:22:41.231 }, 00:22:41.231 "method": "bdev_nvme_attach_controller" 00:22:41.231 } 00:22:41.231 EOF 00:22:41.231 )") 00:22:41.231 10:48:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:41.231 10:48:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:41.231 10:48:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:41.231 { 00:22:41.231 "params": { 00:22:41.231 "name": "Nvme$subsystem", 00:22:41.231 "trtype": "$TEST_TRANSPORT", 00:22:41.231 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.231 "adrfam": "ipv4", 00:22:41.231 "trsvcid": "$NVMF_PORT", 00:22:41.231 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.231 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.231 "hdgst": ${hdgst:-false}, 00:22:41.231 "ddgst": ${ddgst:-false} 00:22:41.231 }, 00:22:41.231 "method": "bdev_nvme_attach_controller" 00:22:41.231 } 00:22:41.231 EOF 00:22:41.231 )") 00:22:41.231 10:48:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:41.231 10:48:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:41.231 10:48:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:41.231 { 00:22:41.231 "params": { 00:22:41.231 "name": "Nvme$subsystem", 00:22:41.231 "trtype": "$TEST_TRANSPORT", 00:22:41.231 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.231 "adrfam": "ipv4", 00:22:41.231 "trsvcid": "$NVMF_PORT", 00:22:41.231 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.231 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.231 "hdgst": ${hdgst:-false}, 00:22:41.231 "ddgst": ${ddgst:-false} 00:22:41.231 }, 00:22:41.231 "method": "bdev_nvme_attach_controller" 00:22:41.231 } 00:22:41.231 EOF 00:22:41.231 )") 00:22:41.231 10:48:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:41.231 10:48:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:41.231 10:48:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:41.231 { 00:22:41.231 "params": { 00:22:41.232 "name": "Nvme$subsystem", 00:22:41.232 "trtype": "$TEST_TRANSPORT", 00:22:41.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.232 "adrfam": "ipv4", 00:22:41.232 "trsvcid": "$NVMF_PORT", 00:22:41.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.232 "hdgst": ${hdgst:-false}, 00:22:41.232 "ddgst": ${ddgst:-false} 00:22:41.232 }, 00:22:41.232 "method": "bdev_nvme_attach_controller" 00:22:41.232 } 00:22:41.232 EOF 00:22:41.232 )") 00:22:41.232 10:48:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:41.232 10:48:05 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:41.232 10:48:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:41.232 { 00:22:41.232 "params": { 00:22:41.232 "name": "Nvme$subsystem", 00:22:41.232 "trtype": "$TEST_TRANSPORT", 00:22:41.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.232 "adrfam": "ipv4", 00:22:41.232 "trsvcid": "$NVMF_PORT", 00:22:41.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.232 "hdgst": ${hdgst:-false}, 00:22:41.232 "ddgst": ${ddgst:-false} 00:22:41.232 }, 00:22:41.232 "method": "bdev_nvme_attach_controller" 00:22:41.232 } 00:22:41.232 EOF 00:22:41.232 )") 00:22:41.232 10:48:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:41.232 10:48:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:41.232 10:48:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:41.232 { 00:22:41.232 "params": { 00:22:41.232 "name": "Nvme$subsystem", 00:22:41.232 "trtype": "$TEST_TRANSPORT", 00:22:41.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.232 "adrfam": "ipv4", 00:22:41.232 "trsvcid": "$NVMF_PORT", 00:22:41.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.232 "hdgst": ${hdgst:-false}, 00:22:41.232 "ddgst": ${ddgst:-false} 00:22:41.232 }, 00:22:41.232 "method": "bdev_nvme_attach_controller" 00:22:41.232 } 00:22:41.232 EOF 00:22:41.232 )") 00:22:41.232 10:48:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:41.232 [2024-06-10 10:48:05.426460] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
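For context, the shutdown_tc1 steps traced around this point (shutdown.sh@77 through @91) reduce to the sketch below; backgrounding and bookkeeping details are condensed, and the PID values in the comments are the ones from this particular run.

# Attach to all 10 subsystems once with bdev_svc to prove the target accepts
# the connections, then hard-kill the initiator and re-attach under verify I/O.
"$rootdir/test/app/bdev_svc/bdev_svc" -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}") &
perfpid=$!                                        # 906417 in this run
waitforlisten $perfpid /var/tmp/bdevperf.sock
rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init

kill -9 $perfpid                                  # simulate an abrupt host exit
rm -f /var/run/spdk_bdev1
sleep 1
kill -0 $nvmfpid                                  # target (906009) must still be alive

# Second pass with the same generated JSON: queue depth 64, 64 KiB verify I/O for 1 second.
"$rootdir/build/examples/bdevperf" --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
    -q 64 -o 65536 -w verify -t 1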
00:22:41.232 [2024-06-10 10:48:05.426516] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid906832 ] 00:22:41.232 10:48:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:41.232 10:48:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:41.232 { 00:22:41.232 "params": { 00:22:41.232 "name": "Nvme$subsystem", 00:22:41.232 "trtype": "$TEST_TRANSPORT", 00:22:41.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.232 "adrfam": "ipv4", 00:22:41.232 "trsvcid": "$NVMF_PORT", 00:22:41.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.232 "hdgst": ${hdgst:-false}, 00:22:41.232 "ddgst": ${ddgst:-false} 00:22:41.232 }, 00:22:41.232 "method": "bdev_nvme_attach_controller" 00:22:41.232 } 00:22:41.232 EOF 00:22:41.232 )") 00:22:41.232 10:48:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:41.232 10:48:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:41.232 10:48:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:41.232 { 00:22:41.232 "params": { 00:22:41.232 "name": "Nvme$subsystem", 00:22:41.232 "trtype": "$TEST_TRANSPORT", 00:22:41.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.232 "adrfam": "ipv4", 00:22:41.232 "trsvcid": "$NVMF_PORT", 00:22:41.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.232 "hdgst": ${hdgst:-false}, 00:22:41.232 "ddgst": ${ddgst:-false} 00:22:41.232 }, 00:22:41.232 "method": "bdev_nvme_attach_controller" 00:22:41.232 } 00:22:41.232 EOF 00:22:41.232 )") 00:22:41.232 10:48:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:41.232 10:48:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:41.232 10:48:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:41.232 { 00:22:41.232 "params": { 00:22:41.232 "name": "Nvme$subsystem", 00:22:41.232 "trtype": "$TEST_TRANSPORT", 00:22:41.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.232 "adrfam": "ipv4", 00:22:41.232 "trsvcid": "$NVMF_PORT", 00:22:41.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.232 "hdgst": ${hdgst:-false}, 00:22:41.232 "ddgst": ${ddgst:-false} 00:22:41.232 }, 00:22:41.232 "method": "bdev_nvme_attach_controller" 00:22:41.232 } 00:22:41.232 EOF 00:22:41.232 )") 00:22:41.232 10:48:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:41.232 10:48:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:41.232 10:48:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:41.232 { 00:22:41.232 "params": { 00:22:41.232 "name": "Nvme$subsystem", 00:22:41.232 "trtype": "$TEST_TRANSPORT", 00:22:41.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:41.232 "adrfam": "ipv4", 00:22:41.232 "trsvcid": "$NVMF_PORT", 00:22:41.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:41.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:41.232 "hdgst": ${hdgst:-false}, 
00:22:41.232 "ddgst": ${ddgst:-false} 00:22:41.232 }, 00:22:41.232 "method": "bdev_nvme_attach_controller" 00:22:41.232 } 00:22:41.232 EOF 00:22:41.232 )") 00:22:41.232 10:48:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:41.232 EAL: No free 2048 kB hugepages reported on node 1 00:22:41.232 10:48:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:22:41.232 10:48:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:41.232 10:48:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:41.232 "params": { 00:22:41.232 "name": "Nvme1", 00:22:41.232 "trtype": "tcp", 00:22:41.232 "traddr": "10.0.0.2", 00:22:41.232 "adrfam": "ipv4", 00:22:41.232 "trsvcid": "4420", 00:22:41.232 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:41.232 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:41.232 "hdgst": false, 00:22:41.232 "ddgst": false 00:22:41.232 }, 00:22:41.232 "method": "bdev_nvme_attach_controller" 00:22:41.232 },{ 00:22:41.232 "params": { 00:22:41.232 "name": "Nvme2", 00:22:41.232 "trtype": "tcp", 00:22:41.232 "traddr": "10.0.0.2", 00:22:41.232 "adrfam": "ipv4", 00:22:41.232 "trsvcid": "4420", 00:22:41.232 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:41.232 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:41.232 "hdgst": false, 00:22:41.232 "ddgst": false 00:22:41.232 }, 00:22:41.232 "method": "bdev_nvme_attach_controller" 00:22:41.232 },{ 00:22:41.232 "params": { 00:22:41.232 "name": "Nvme3", 00:22:41.232 "trtype": "tcp", 00:22:41.232 "traddr": "10.0.0.2", 00:22:41.232 "adrfam": "ipv4", 00:22:41.232 "trsvcid": "4420", 00:22:41.232 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:41.232 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:41.232 "hdgst": false, 00:22:41.232 "ddgst": false 00:22:41.232 }, 00:22:41.232 "method": "bdev_nvme_attach_controller" 00:22:41.232 },{ 00:22:41.232 "params": { 00:22:41.232 "name": "Nvme4", 00:22:41.232 "trtype": "tcp", 00:22:41.232 "traddr": "10.0.0.2", 00:22:41.232 "adrfam": "ipv4", 00:22:41.232 "trsvcid": "4420", 00:22:41.232 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:41.232 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:41.232 "hdgst": false, 00:22:41.232 "ddgst": false 00:22:41.232 }, 00:22:41.232 "method": "bdev_nvme_attach_controller" 00:22:41.232 },{ 00:22:41.232 "params": { 00:22:41.232 "name": "Nvme5", 00:22:41.232 "trtype": "tcp", 00:22:41.232 "traddr": "10.0.0.2", 00:22:41.232 "adrfam": "ipv4", 00:22:41.232 "trsvcid": "4420", 00:22:41.232 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:41.232 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:41.232 "hdgst": false, 00:22:41.232 "ddgst": false 00:22:41.232 }, 00:22:41.232 "method": "bdev_nvme_attach_controller" 00:22:41.232 },{ 00:22:41.232 "params": { 00:22:41.232 "name": "Nvme6", 00:22:41.232 "trtype": "tcp", 00:22:41.232 "traddr": "10.0.0.2", 00:22:41.232 "adrfam": "ipv4", 00:22:41.232 "trsvcid": "4420", 00:22:41.232 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:41.232 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:41.232 "hdgst": false, 00:22:41.232 "ddgst": false 00:22:41.232 }, 00:22:41.232 "method": "bdev_nvme_attach_controller" 00:22:41.232 },{ 00:22:41.232 "params": { 00:22:41.232 "name": "Nvme7", 00:22:41.232 "trtype": "tcp", 00:22:41.232 "traddr": "10.0.0.2", 00:22:41.232 "adrfam": "ipv4", 00:22:41.232 "trsvcid": "4420", 00:22:41.232 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:41.232 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:41.233 "hdgst": false, 00:22:41.233 "ddgst": false 
00:22:41.233 }, 00:22:41.233 "method": "bdev_nvme_attach_controller" 00:22:41.233 },{ 00:22:41.233 "params": { 00:22:41.233 "name": "Nvme8", 00:22:41.233 "trtype": "tcp", 00:22:41.233 "traddr": "10.0.0.2", 00:22:41.233 "adrfam": "ipv4", 00:22:41.233 "trsvcid": "4420", 00:22:41.233 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:41.233 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:41.233 "hdgst": false, 00:22:41.233 "ddgst": false 00:22:41.233 }, 00:22:41.233 "method": "bdev_nvme_attach_controller" 00:22:41.233 },{ 00:22:41.233 "params": { 00:22:41.233 "name": "Nvme9", 00:22:41.233 "trtype": "tcp", 00:22:41.233 "traddr": "10.0.0.2", 00:22:41.233 "adrfam": "ipv4", 00:22:41.233 "trsvcid": "4420", 00:22:41.233 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:41.233 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:41.233 "hdgst": false, 00:22:41.233 "ddgst": false 00:22:41.233 }, 00:22:41.233 "method": "bdev_nvme_attach_controller" 00:22:41.233 },{ 00:22:41.233 "params": { 00:22:41.233 "name": "Nvme10", 00:22:41.233 "trtype": "tcp", 00:22:41.233 "traddr": "10.0.0.2", 00:22:41.233 "adrfam": "ipv4", 00:22:41.233 "trsvcid": "4420", 00:22:41.233 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:41.233 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:41.233 "hdgst": false, 00:22:41.233 "ddgst": false 00:22:41.233 }, 00:22:41.233 "method": "bdev_nvme_attach_controller" 00:22:41.233 }' 00:22:41.233 [2024-06-10 10:48:05.488160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.545 [2024-06-10 10:48:05.552797] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:22:42.929 Running I/O for 1 seconds... 00:22:44.312 00:22:44.312 Latency(us) 00:22:44.312 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:44.312 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:44.312 Verification LBA range: start 0x0 length 0x400 00:22:44.312 Nvme1n1 : 1.16 221.40 13.84 0.00 0.00 286093.44 18350.08 248162.99 00:22:44.312 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:44.312 Verification LBA range: start 0x0 length 0x400 00:22:44.312 Nvme2n1 : 1.12 228.76 14.30 0.00 0.00 272121.17 38010.88 228939.09 00:22:44.313 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:44.313 Verification LBA range: start 0x0 length 0x400 00:22:44.313 Nvme3n1 : 1.11 234.60 14.66 0.00 0.00 258843.82 4450.99 242920.11 00:22:44.313 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:44.313 Verification LBA range: start 0x0 length 0x400 00:22:44.313 Nvme4n1 : 1.16 274.69 17.17 0.00 0.00 218938.03 13380.27 255153.49 00:22:44.313 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:44.313 Verification LBA range: start 0x0 length 0x400 00:22:44.313 Nvme5n1 : 1.12 228.20 14.26 0.00 0.00 257690.24 20206.93 248162.99 00:22:44.313 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:44.313 Verification LBA range: start 0x0 length 0x400 00:22:44.313 Nvme6n1 : 1.19 269.55 16.85 0.00 0.00 215689.39 19442.35 242920.11 00:22:44.313 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:44.313 Verification LBA range: start 0x0 length 0x400 00:22:44.313 Nvme7n1 : 1.20 213.37 13.34 0.00 0.00 267937.92 25668.27 274377.39 00:22:44.313 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:44.313 Verification LBA range: start 0x0 length 0x400 00:22:44.313 Nvme8n1 : 1.25 255.76 15.98 0.00 0.00 212751.87 
18677.76 255153.49 00:22:44.313 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:44.313 Verification LBA range: start 0x0 length 0x400 00:22:44.313 Nvme9n1 : 1.19 215.03 13.44 0.00 0.00 256017.71 21626.88 258648.75 00:22:44.313 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:44.313 Verification LBA range: start 0x0 length 0x400 00:22:44.313 Nvme10n1 : 1.22 262.10 16.38 0.00 0.00 206986.24 9994.24 256901.12 00:22:44.313 =================================================================================================================== 00:22:44.313 Total : 2403.46 150.22 0.00 0.00 242452.85 4450.99 274377.39 00:22:44.313 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:22:44.313 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:44.313 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:44.313 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:44.313 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:44.313 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:44.313 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:22:44.313 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:44.313 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:22:44.313 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:44.313 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:44.313 rmmod nvme_tcp 00:22:44.313 rmmod nvme_fabrics 00:22:44.313 rmmod nvme_keyring 00:22:44.313 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:44.313 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:22:44.313 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:22:44.313 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 906009 ']' 00:22:44.313 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 906009 00:22:44.313 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@949 -- # '[' -z 906009 ']' 00:22:44.313 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # kill -0 906009 00:22:44.313 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # uname 00:22:44.313 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:44.313 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 906009 00:22:44.313 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:44.313 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:44.313 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@967 -- # echo 'killing process with pid 906009' 00:22:44.313 killing process with pid 906009 00:22:44.313 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # kill 906009 00:22:44.313 [2024-06-10 10:48:08.454978] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:44.313 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # wait 906009 00:22:44.573 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:44.573 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:44.573 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:44.573 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:44.573 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:44.573 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.573 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:44.573 10:48:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.484 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:46.484 00:22:46.484 real 0m16.717s 00:22:46.484 user 0m33.972s 00:22:46.484 sys 0m6.739s 00:22:46.484 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:46.484 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:46.484 ************************************ 00:22:46.484 END TEST nvmf_shutdown_tc1 00:22:46.484 ************************************ 00:22:46.745 10:48:10 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:46.745 10:48:10 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:22:46.745 10:48:10 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:46.745 10:48:10 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:46.745 ************************************ 00:22:46.745 START TEST nvmf_shutdown_tc2 00:22:46.745 ************************************ 00:22:46.745 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc2 00:22:46.745 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:22:46.745 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:46.745 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:46.745 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:46.745 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:46.745 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:46.745 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:46.745 10:48:10 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.745 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:46.745 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.745 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:46.745 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:46.745 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:46.745 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:46.745 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:46.745 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:46.745 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:46.745 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:46.745 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- 
# mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:46.746 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:46.746 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.746 10:48:10 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:46.746 Found net devices under 0000:31:00.0: cvl_0_0 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:46.746 Found net devices under 0000:31:00.1: cvl_0_1 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:46.746 10:48:10 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:46.746 10:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:46.746 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:46.746 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:47.007 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:47.007 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:47.007 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:47.007 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:47.007 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:47.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:47.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:22:47.007 00:22:47.007 --- 10.0.0.2 ping statistics --- 00:22:47.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.007 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:22:47.007 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:47.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:47.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:22:47.007 00:22:47.007 --- 10.0.0.1 ping statistics --- 00:22:47.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.007 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:22:47.007 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:47.007 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:22:47.007 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:47.007 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:47.007 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:47.007 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:47.007 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:47.007 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:47.007 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:47.007 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:47.007 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:47.007 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:47.007 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:47.007 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=908644 00:22:47.007 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 908644 00:22:47.007 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 908644 ']' 00:22:47.007 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:47.007 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.007 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:47.007 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:47.007 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:47.007 10:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:47.007 [2024-06-10 10:48:11.272709] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
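Before this tc2 target comes up, nvmftestinit/nvmf_tcp_init set up the two-port back-to-back topology traced above; stripped of the xtrace noise, the interface setup amounts to the following (all commands appear verbatim in the trace).

# cvl_0_0 / cvl_0_1 are the two ice (E810) ports detected above. The target-side
# port is moved into its own namespace so target and initiator use separate stacks.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The nvmf_tgt instance for tc2 is then launched through that same namespace (the ip netns exec cvl_0_0_ns_spdk prefix added to NVMF_APP), which is why its TCP listener comes up on 10.0.0.2 port 4420 while bdevperf connects from cvl_0_1.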
00:22:47.007 [2024-06-10 10:48:11.272756] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:47.268 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.268 [2024-06-10 10:48:11.350264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:47.268 [2024-06-10 10:48:11.404525] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:47.268 [2024-06-10 10:48:11.404556] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:47.268 [2024-06-10 10:48:11.404562] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:47.268 [2024-06-10 10:48:11.404566] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:47.268 [2024-06-10 10:48:11.404570] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:47.268 [2024-06-10 10:48:11.404684] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:22:47.268 [2024-06-10 10:48:11.404839] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:22:47.268 [2024-06-10 10:48:11.404993] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:22:47.268 [2024-06-10 10:48:11.404995] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4 00:22:47.841 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:47.841 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:22:47.841 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:47.841 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:47.841 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:47.841 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:47.841 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:47.841 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:47.841 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:47.841 [2024-06-10 10:48:12.077393] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:47.841 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:47.841 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:47.841 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:47.841 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:47.841 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:47.841 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:47.841 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:47.841 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:47.841 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:47.841 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:47.841 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:47.841 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:47.841 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:47.841 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:47.841 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:47.841 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:47.841 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:47.841 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:47.841 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:47.841 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:47.841 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:47.841 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:48.101 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:48.101 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:48.101 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:48.101 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:48.101 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:48.101 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:48.101 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:48.101 Malloc1 00:22:48.101 [2024-06-10 10:48:12.176006] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:48.101 [2024-06-10 10:48:12.176187] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:48.102 Malloc2 00:22:48.102 Malloc3 00:22:48.102 Malloc4 00:22:48.102 Malloc5 00:22:48.102 Malloc6 00:22:48.102 Malloc7 00:22:48.361 Malloc8 00:22:48.361 Malloc9 00:22:48.361 Malloc10 00:22:48.361 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:48.361 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:48.361 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:48.361 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:48.361 10:48:12 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=908840 00:22:48.361 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 908840 /var/tmp/bdevperf.sock 00:22:48.361 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 908840 ']' 00:22:48.361 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:48.361 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:48.361 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:48.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:48.361 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:48.361 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:48.361 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:48.361 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:48.361 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:22:48.361 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:22:48.361 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:48.361 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.361 { 00:22:48.361 "params": { 00:22:48.361 "name": "Nvme$subsystem", 00:22:48.361 "trtype": "$TEST_TRANSPORT", 00:22:48.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.361 "adrfam": "ipv4", 00:22:48.361 "trsvcid": "$NVMF_PORT", 00:22:48.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.361 "hdgst": ${hdgst:-false}, 00:22:48.361 "ddgst": ${ddgst:-false} 00:22:48.361 }, 00:22:48.361 "method": "bdev_nvme_attach_controller" 00:22:48.361 } 00:22:48.361 EOF 00:22:48.361 )") 00:22:48.361 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:48.361 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:48.361 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.361 { 00:22:48.361 "params": { 00:22:48.361 "name": "Nvme$subsystem", 00:22:48.361 "trtype": "$TEST_TRANSPORT", 00:22:48.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.361 "adrfam": "ipv4", 00:22:48.361 "trsvcid": "$NVMF_PORT", 00:22:48.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.361 "hdgst": ${hdgst:-false}, 00:22:48.361 "ddgst": ${ddgst:-false} 00:22:48.361 }, 00:22:48.361 "method": "bdev_nvme_attach_controller" 00:22:48.361 } 00:22:48.361 EOF 00:22:48.361 )") 00:22:48.361 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:48.361 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:48.361 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.361 { 00:22:48.361 "params": { 00:22:48.361 "name": "Nvme$subsystem", 00:22:48.361 "trtype": "$TEST_TRANSPORT", 00:22:48.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.361 "adrfam": "ipv4", 00:22:48.361 "trsvcid": "$NVMF_PORT", 00:22:48.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.361 "hdgst": ${hdgst:-false}, 00:22:48.361 "ddgst": ${ddgst:-false} 00:22:48.361 }, 00:22:48.361 "method": "bdev_nvme_attach_controller" 00:22:48.361 } 00:22:48.361 EOF 00:22:48.361 )") 00:22:48.361 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:48.361 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:48.362 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.362 { 00:22:48.362 "params": { 00:22:48.362 "name": "Nvme$subsystem", 00:22:48.362 "trtype": "$TEST_TRANSPORT", 00:22:48.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.362 "adrfam": "ipv4", 00:22:48.362 "trsvcid": "$NVMF_PORT", 00:22:48.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.362 "hdgst": ${hdgst:-false}, 00:22:48.362 "ddgst": ${ddgst:-false} 00:22:48.362 }, 00:22:48.362 "method": "bdev_nvme_attach_controller" 00:22:48.362 } 00:22:48.362 EOF 00:22:48.362 )") 00:22:48.362 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:48.362 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:48.362 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.362 { 00:22:48.362 "params": { 00:22:48.362 "name": "Nvme$subsystem", 00:22:48.362 "trtype": "$TEST_TRANSPORT", 00:22:48.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.362 "adrfam": "ipv4", 00:22:48.362 "trsvcid": "$NVMF_PORT", 00:22:48.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.362 "hdgst": ${hdgst:-false}, 00:22:48.362 "ddgst": ${ddgst:-false} 00:22:48.362 }, 00:22:48.362 "method": "bdev_nvme_attach_controller" 00:22:48.362 } 00:22:48.362 EOF 00:22:48.362 )") 00:22:48.362 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:48.362 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:48.362 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.362 { 00:22:48.362 "params": { 00:22:48.362 "name": "Nvme$subsystem", 00:22:48.362 "trtype": "$TEST_TRANSPORT", 00:22:48.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.362 "adrfam": "ipv4", 00:22:48.362 "trsvcid": "$NVMF_PORT", 00:22:48.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.362 "hdgst": ${hdgst:-false}, 00:22:48.362 "ddgst": ${ddgst:-false} 00:22:48.362 }, 00:22:48.362 "method": "bdev_nvme_attach_controller" 00:22:48.362 } 00:22:48.362 EOF 00:22:48.362 )") 00:22:48.362 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:48.362 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:22:48.362 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.362 { 00:22:48.362 "params": { 00:22:48.362 "name": "Nvme$subsystem", 00:22:48.362 "trtype": "$TEST_TRANSPORT", 00:22:48.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.362 "adrfam": "ipv4", 00:22:48.362 "trsvcid": "$NVMF_PORT", 00:22:48.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.362 "hdgst": ${hdgst:-false}, 00:22:48.362 "ddgst": ${ddgst:-false} 00:22:48.362 }, 00:22:48.362 "method": "bdev_nvme_attach_controller" 00:22:48.362 } 00:22:48.362 EOF 00:22:48.362 )") 00:22:48.362 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:48.362 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:48.362 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.362 { 00:22:48.362 "params": { 00:22:48.362 "name": "Nvme$subsystem", 00:22:48.362 "trtype": "$TEST_TRANSPORT", 00:22:48.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.362 "adrfam": "ipv4", 00:22:48.362 "trsvcid": "$NVMF_PORT", 00:22:48.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.362 "hdgst": ${hdgst:-false}, 00:22:48.362 "ddgst": ${ddgst:-false} 00:22:48.362 }, 00:22:48.362 "method": "bdev_nvme_attach_controller" 00:22:48.362 } 00:22:48.362 EOF 00:22:48.362 )") 00:22:48.362 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:48.362 [2024-06-10 10:48:12.631463] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
00:22:48.362 [2024-06-10 10:48:12.631529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid908840 ] 00:22:48.362 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:48.362 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.362 { 00:22:48.362 "params": { 00:22:48.362 "name": "Nvme$subsystem", 00:22:48.362 "trtype": "$TEST_TRANSPORT", 00:22:48.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.362 "adrfam": "ipv4", 00:22:48.362 "trsvcid": "$NVMF_PORT", 00:22:48.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.362 "hdgst": ${hdgst:-false}, 00:22:48.362 "ddgst": ${ddgst:-false} 00:22:48.362 }, 00:22:48.362 "method": "bdev_nvme_attach_controller" 00:22:48.362 } 00:22:48.362 EOF 00:22:48.362 )") 00:22:48.362 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:48.362 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:48.362 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:48.362 { 00:22:48.362 "params": { 00:22:48.362 "name": "Nvme$subsystem", 00:22:48.362 "trtype": "$TEST_TRANSPORT", 00:22:48.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:48.362 "adrfam": "ipv4", 00:22:48.362 "trsvcid": "$NVMF_PORT", 00:22:48.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:48.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:48.362 "hdgst": ${hdgst:-false}, 00:22:48.362 "ddgst": ${ddgst:-false} 00:22:48.362 }, 00:22:48.362 "method": "bdev_nvme_attach_controller" 00:22:48.362 } 00:22:48.362 EOF 00:22:48.362 )") 00:22:48.362 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:48.623 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
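The per-controller fragments collected above are joined and pretty-printed next. Below is a minimal sketch of the gen_nvmf_target_json helper being traced; only the fragments, the IFS=, join and the jq call are visible in the trace, so the outer "subsystems"/"bdev" wrapper is an assumption, and the tcp/10.0.0.2/4420 values are the resolved ones printed further down.

# Sketch, one bdev_nvme_attach_controller fragment per subsystem:
gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Comma-join the fragments and pretty-print the assumed wrapper with jq.
    local IFS=,
    jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF
}

# bdevperf then reads the result on an anonymous fd, which is why the trace
# shows "--json /dev/fd/63":
#   bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json {1..10}) \
#            -q 64 -o 65536 -w verify -t 10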
00:22:48.623 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:22:48.623 10:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:48.623 "params": { 00:22:48.623 "name": "Nvme1", 00:22:48.623 "trtype": "tcp", 00:22:48.623 "traddr": "10.0.0.2", 00:22:48.623 "adrfam": "ipv4", 00:22:48.623 "trsvcid": "4420", 00:22:48.623 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:48.623 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:48.623 "hdgst": false, 00:22:48.623 "ddgst": false 00:22:48.623 }, 00:22:48.623 "method": "bdev_nvme_attach_controller" 00:22:48.623 },{ 00:22:48.623 "params": { 00:22:48.623 "name": "Nvme2", 00:22:48.623 "trtype": "tcp", 00:22:48.623 "traddr": "10.0.0.2", 00:22:48.623 "adrfam": "ipv4", 00:22:48.623 "trsvcid": "4420", 00:22:48.623 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:48.623 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:48.623 "hdgst": false, 00:22:48.623 "ddgst": false 00:22:48.623 }, 00:22:48.623 "method": "bdev_nvme_attach_controller" 00:22:48.623 },{ 00:22:48.623 "params": { 00:22:48.623 "name": "Nvme3", 00:22:48.623 "trtype": "tcp", 00:22:48.623 "traddr": "10.0.0.2", 00:22:48.623 "adrfam": "ipv4", 00:22:48.623 "trsvcid": "4420", 00:22:48.623 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:48.623 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:48.623 "hdgst": false, 00:22:48.623 "ddgst": false 00:22:48.623 }, 00:22:48.623 "method": "bdev_nvme_attach_controller" 00:22:48.623 },{ 00:22:48.623 "params": { 00:22:48.623 "name": "Nvme4", 00:22:48.623 "trtype": "tcp", 00:22:48.623 "traddr": "10.0.0.2", 00:22:48.623 "adrfam": "ipv4", 00:22:48.623 "trsvcid": "4420", 00:22:48.623 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:48.623 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:48.623 "hdgst": false, 00:22:48.623 "ddgst": false 00:22:48.623 }, 00:22:48.623 "method": "bdev_nvme_attach_controller" 00:22:48.623 },{ 00:22:48.623 "params": { 00:22:48.623 "name": "Nvme5", 00:22:48.623 "trtype": "tcp", 00:22:48.623 "traddr": "10.0.0.2", 00:22:48.623 "adrfam": "ipv4", 00:22:48.623 "trsvcid": "4420", 00:22:48.623 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:48.623 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:48.623 "hdgst": false, 00:22:48.623 "ddgst": false 00:22:48.623 }, 00:22:48.623 "method": "bdev_nvme_attach_controller" 00:22:48.623 },{ 00:22:48.623 "params": { 00:22:48.623 "name": "Nvme6", 00:22:48.623 "trtype": "tcp", 00:22:48.623 "traddr": "10.0.0.2", 00:22:48.623 "adrfam": "ipv4", 00:22:48.623 "trsvcid": "4420", 00:22:48.623 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:48.623 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:48.623 "hdgst": false, 00:22:48.623 "ddgst": false 00:22:48.623 }, 00:22:48.623 "method": "bdev_nvme_attach_controller" 00:22:48.623 },{ 00:22:48.623 "params": { 00:22:48.623 "name": "Nvme7", 00:22:48.623 "trtype": "tcp", 00:22:48.623 "traddr": "10.0.0.2", 00:22:48.623 "adrfam": "ipv4", 00:22:48.623 "trsvcid": "4420", 00:22:48.623 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:48.623 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:48.623 "hdgst": false, 00:22:48.623 "ddgst": false 00:22:48.623 }, 00:22:48.623 "method": "bdev_nvme_attach_controller" 00:22:48.623 },{ 00:22:48.623 "params": { 00:22:48.623 "name": "Nvme8", 00:22:48.623 "trtype": "tcp", 00:22:48.623 "traddr": "10.0.0.2", 00:22:48.623 "adrfam": "ipv4", 00:22:48.623 "trsvcid": "4420", 00:22:48.623 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:48.623 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:48.623 "hdgst": false, 
00:22:48.623 "ddgst": false 00:22:48.623 }, 00:22:48.623 "method": "bdev_nvme_attach_controller" 00:22:48.623 },{ 00:22:48.623 "params": { 00:22:48.623 "name": "Nvme9", 00:22:48.623 "trtype": "tcp", 00:22:48.623 "traddr": "10.0.0.2", 00:22:48.623 "adrfam": "ipv4", 00:22:48.623 "trsvcid": "4420", 00:22:48.623 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:48.623 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:48.623 "hdgst": false, 00:22:48.623 "ddgst": false 00:22:48.623 }, 00:22:48.623 "method": "bdev_nvme_attach_controller" 00:22:48.623 },{ 00:22:48.623 "params": { 00:22:48.623 "name": "Nvme10", 00:22:48.623 "trtype": "tcp", 00:22:48.623 "traddr": "10.0.0.2", 00:22:48.623 "adrfam": "ipv4", 00:22:48.623 "trsvcid": "4420", 00:22:48.623 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:48.623 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:48.623 "hdgst": false, 00:22:48.623 "ddgst": false 00:22:48.623 }, 00:22:48.623 "method": "bdev_nvme_attach_controller" 00:22:48.623 }' 00:22:48.623 EAL: No free 2048 kB hugepages reported on node 1 00:22:48.623 [2024-06-10 10:48:12.692168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.623 [2024-06-10 10:48:12.757455] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.009 Running I/O for 10 seconds... 00:22:50.009 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:50.009 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:22:50.009 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:50.009 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:50.009 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:50.270 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:50.270 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:50.270 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:50.270 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:50.270 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:22:50.270 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:22:50.270 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:50.270 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:50.270 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:50.270 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:50.270 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:50.270 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:50.270 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:50.270 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:22:50.270 10:48:14 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:22:50.270 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:50.532 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:50.532 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:50.532 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:50.532 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:50.532 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:50.532 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:50.532 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:50.532 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:22:50.532 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:22:50.532 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:50.794 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:50.794 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:50.794 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:50.794 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:50.794 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:50.794 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:50.794 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:50.794 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:22:50.794 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:22:50.794 10:48:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:51.055 10:48:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:51.055 10:48:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:51.055 10:48:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:51.055 10:48:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:51.055 10:48:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:51.055 10:48:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:51.055 10:48:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:51.055 10:48:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:22:51.055 10:48:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:22:51.055 
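The read counter polled above has just crossed the 100-op threshold (3, then 67, then 131 reads). Condensed into one place, the waitforio helper being traced works roughly as in this sketch; scripts/rpc.py -s stands in for the harness's rpc_cmd wrapper, and the socket and bdev names are the ones used in this run.

# Poll bdevperf's private RPC socket until the bdev reports >= 100 reads,
# giving up after ten attempts spaced 0.25 s apart.
waitforio() {
    local rpc_sock=$1 bdev=$2
    local ret=1 i count
    for ((i = 10; i != 0; i--)); do
        count=$(scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
        if [ "$count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

# As traced here: waitforio /var/tmp/bdevperf.sock Nvme1n1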
10:48:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:22:51.055 10:48:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:22:51.055 10:48:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:22:51.055 10:48:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 908840 00:22:51.055 10:48:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 908840 ']' 00:22:51.055 10:48:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 908840 00:22:51.055 10:48:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:22:51.055 10:48:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:51.055 10:48:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 908840 00:22:51.317 10:48:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:22:51.317 10:48:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:22:51.317 10:48:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 908840' 00:22:51.317 killing process with pid 908840 00:22:51.317 10:48:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 908840 00:22:51.317 10:48:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 908840 00:22:51.317 Received shutdown signal, test time was about 1.262477 seconds 00:22:51.317 00:22:51.317 Latency(us) 00:22:51.317 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:51.317 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:51.317 Verification LBA range: start 0x0 length 0x400 00:22:51.317 Nvme1n1 : 1.24 154.91 9.68 0.00 0.00 408000.28 32549.55 335544.32 00:22:51.317 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:51.317 Verification LBA range: start 0x0 length 0x400 00:22:51.317 Nvme2n1 : 1.22 157.09 9.82 0.00 0.00 397419.52 26760.53 340787.20 00:22:51.317 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:51.317 Verification LBA range: start 0x0 length 0x400 00:22:51.317 Nvme3n1 : 1.23 208.13 13.01 0.00 0.00 294889.92 10376.53 342534.83 00:22:51.317 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:51.317 Verification LBA range: start 0x0 length 0x400 00:22:51.317 Nvme4n1 : 1.24 205.89 12.87 0.00 0.00 293690.99 13544.11 340787.20 00:22:51.317 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:51.317 Verification LBA range: start 0x0 length 0x400 00:22:51.317 Nvme5n1 : 1.26 201.33 12.58 0.00 0.00 295094.12 15619.41 316320.43 00:22:51.317 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:51.317 Verification LBA range: start 0x0 length 0x400 00:22:51.317 Nvme6n1 : 1.25 204.62 12.79 0.00 0.00 285905.71 26432.85 330301.44 00:22:51.317 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:51.317 Verification LBA range: start 0x0 length 0x400 00:22:51.317 Nvme7n1 : 1.25 205.29 12.83 0.00 0.00 280295.04 21736.11 358263.47 00:22:51.317 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:51.317 
Verification LBA range: start 0x0 length 0x400 00:22:51.317 Nvme8n1 : 1.25 204.29 12.77 0.00 0.00 277001.17 24357.55 339039.57 00:22:51.317 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:51.317 Verification LBA range: start 0x0 length 0x400 00:22:51.317 Nvme9n1 : 1.26 203.71 12.73 0.00 0.00 273252.91 15182.51 342534.83 00:22:51.317 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:51.317 Verification LBA range: start 0x0 length 0x400 00:22:51.317 Nvme10n1 : 1.24 155.13 9.70 0.00 0.00 351942.26 24466.77 360011.09 00:22:51.317 =================================================================================================================== 00:22:51.317 Total : 1900.39 118.77 0.00 0.00 310083.08 10376.53 360011.09 00:22:51.317 10:48:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:22:52.705 10:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 908644 00:22:52.705 10:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:22:52.705 10:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:52.705 10:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:52.705 10:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:52.705 10:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:52.705 10:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:52.705 10:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:22:52.705 10:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:52.705 10:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:22:52.705 10:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:52.705 10:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:52.705 rmmod nvme_tcp 00:22:52.705 rmmod nvme_fabrics 00:22:52.705 rmmod nvme_keyring 00:22:52.705 10:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:52.705 10:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:22:52.705 10:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:22:52.705 10:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 908644 ']' 00:22:52.705 10:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 908644 00:22:52.705 10:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 908644 ']' 00:22:52.705 10:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 908644 00:22:52.705 10:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:22:52.705 10:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:52.705 10:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 908644 00:22:52.705 
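For reference, the shutdown path being traced through the remainder of this test case reduces to the sketch below; $testdir, $perfpid (908840 in this run) and $nvmfpid (908644) are stand-in names, and the namespace deletion is an assumption about what _remove_spdk_ns does.

# Condensed tc2 teardown: stop the bdevperf initiator, remove generated
# files, unload the kernel NVMe fabrics modules, stop nvmf_tgt and clear
# the test network.
kill "$perfpid" && wait "$perfpid"
rm -f ./local-job0-0-verify.state
rm -rf "$testdir/bdevperf.conf" "$testdir/rpcs.txt"
sync
modprobe -v -r nvme-tcp            # also drops nvme_fabrics / nvme_keyring here
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"
ip netns delete cvl_0_0_ns_spdk    # assumed body of remove_spdk_ns
ip -4 addr flush cvl_0_1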
10:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:52.705 10:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:52.705 10:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 908644' 00:22:52.705 killing process with pid 908644 00:22:52.705 10:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 908644 00:22:52.705 [2024-06-10 10:48:16.691805] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:52.705 10:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 908644 00:22:52.705 10:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:52.705 10:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:52.705 10:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:52.705 10:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:52.705 10:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:52.705 10:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.705 10:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:52.705 10:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.254 10:48:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:55.254 00:22:55.254 real 0m8.154s 00:22:55.254 user 0m25.142s 00:22:55.254 sys 0m1.276s 00:22:55.254 10:48:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:22:55.254 10:48:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:55.254 ************************************ 00:22:55.254 END TEST nvmf_shutdown_tc2 00:22:55.254 ************************************ 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:55.254 ************************************ 00:22:55.254 START TEST nvmf_shutdown_tc3 00:22:55.254 ************************************ 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc3 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:55.254 10:48:19 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:55.254 10:48:19 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:55.254 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:55.254 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:55.254 Found net devices under 0000:31:00.0: cvl_0_0 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.254 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:55.255 Found net devices under 0000:31:00.1: cvl_0_1 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 
-- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:55.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:55.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms 00:22:55.255 00:22:55.255 --- 10.0.0.2 ping statistics --- 00:22:55.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.255 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:55.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:55.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms 00:22:55.255 00:22:55.255 --- 10.0.0.1 ping statistics --- 00:22:55.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.255 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=910254 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 910254 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 910254 ']' 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:55.255 10:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:55.255 [2024-06-10 10:48:19.509832] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
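The addressing that those ping checks just validated was set up a few lines earlier; gathered into one sketch (the back-to-back cabling of the two E810 ports is an assumption about this rig): the first port, cvl_0_0, is moved into a private namespace and carries the NVMe/TCP target at 10.0.0.2, while cvl_0_1 stays in the default namespace as the initiator side at 10.0.0.1.

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                       # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator
# nvmf_tgt is then launched inside the namespace, which is why its startup
# line above is prefixed with "ip netns exec cvl_0_0_ns_spdk".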
00:22:55.255 [2024-06-10 10:48:19.509915] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:55.516 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.516 [2024-06-10 10:48:19.600376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:55.516 [2024-06-10 10:48:19.671334] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:55.516 [2024-06-10 10:48:19.671372] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:55.516 [2024-06-10 10:48:19.671378] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:55.516 [2024-06-10 10:48:19.671383] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:55.516 [2024-06-10 10:48:19.671387] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:55.516 [2024-06-10 10:48:19.671510] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:22:55.516 [2024-06-10 10:48:19.671669] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:22:55.516 [2024-06-10 10:48:19.671802] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:22:55.516 [2024-06-10 10:48:19.671802] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4 00:22:56.087 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:56.087 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:22:56.087 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:56.087 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:56.087 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:56.087 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:56.087 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:56.087 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:56.087 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:56.087 [2024-06-10 10:48:20.315316] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:56.087 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:56.087 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:56.087 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:56.087 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:56.087 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:56.087 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:56.087 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:56.087 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:56.087 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:56.087 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:56.087 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:56.087 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:56.087 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:56.087 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:56.087 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:56.087 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:56.087 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:56.087 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:56.087 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:56.087 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:56.087 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:56.087 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:56.087 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:56.087 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:56.087 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:56.087 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:56.348 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:56.348 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:56.348 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:56.348 Malloc1 00:22:56.348 [2024-06-10 10:48:20.413813] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:56.348 [2024-06-10 10:48:20.414028] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:56.348 Malloc2 00:22:56.348 Malloc3 00:22:56.348 Malloc4 00:22:56.348 Malloc5 00:22:56.348 Malloc6 00:22:56.348 Malloc7 00:22:56.610 Malloc8 00:22:56.610 Malloc9 00:22:56.610 Malloc10 00:22:56.610 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:56.610 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:56.610 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:56.610 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:56.610 10:48:20 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=910587 00:22:56.610 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 910587 /var/tmp/bdevperf.sock 00:22:56.610 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 910587 ']' 00:22:56.610 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:56.610 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:56.610 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:56.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:56.610 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:56.610 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:56.610 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:56.610 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:56.610 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:22:56.610 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:22:56.610 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:56.610 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:56.610 { 00:22:56.610 "params": { 00:22:56.610 "name": "Nvme$subsystem", 00:22:56.610 "trtype": "$TEST_TRANSPORT", 00:22:56.610 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:56.610 "adrfam": "ipv4", 00:22:56.610 "trsvcid": "$NVMF_PORT", 00:22:56.610 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:56.610 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:56.610 "hdgst": ${hdgst:-false}, 00:22:56.610 "ddgst": ${ddgst:-false} 00:22:56.610 }, 00:22:56.610 "method": "bdev_nvme_attach_controller" 00:22:56.610 } 00:22:56.610 EOF 00:22:56.610 )") 00:22:56.610 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:56.610 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:56.610 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:56.610 { 00:22:56.610 "params": { 00:22:56.610 "name": "Nvme$subsystem", 00:22:56.610 "trtype": "$TEST_TRANSPORT", 00:22:56.610 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:56.610 "adrfam": "ipv4", 00:22:56.610 "trsvcid": "$NVMF_PORT", 00:22:56.610 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:56.610 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:56.610 "hdgst": ${hdgst:-false}, 00:22:56.610 "ddgst": ${ddgst:-false} 00:22:56.610 }, 00:22:56.610 "method": "bdev_nvme_attach_controller" 00:22:56.610 } 00:22:56.610 EOF 00:22:56.610 )") 00:22:56.610 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:56.610 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:56.610 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:56.610 { 00:22:56.610 "params": { 00:22:56.610 "name": "Nvme$subsystem", 00:22:56.610 "trtype": "$TEST_TRANSPORT", 00:22:56.610 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:56.610 "adrfam": "ipv4", 00:22:56.610 "trsvcid": "$NVMF_PORT", 00:22:56.610 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:56.610 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:56.610 "hdgst": ${hdgst:-false}, 00:22:56.610 "ddgst": ${ddgst:-false} 00:22:56.610 }, 00:22:56.610 "method": "bdev_nvme_attach_controller" 00:22:56.610 } 00:22:56.610 EOF 00:22:56.610 )") 00:22:56.610 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:56.610 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:56.610 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:56.610 { 00:22:56.610 "params": { 00:22:56.610 "name": "Nvme$subsystem", 00:22:56.610 "trtype": "$TEST_TRANSPORT", 00:22:56.610 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:56.610 "adrfam": "ipv4", 00:22:56.610 "trsvcid": "$NVMF_PORT", 00:22:56.610 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:56.610 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:56.610 "hdgst": ${hdgst:-false}, 00:22:56.610 "ddgst": ${ddgst:-false} 00:22:56.610 }, 00:22:56.610 "method": "bdev_nvme_attach_controller" 00:22:56.610 } 00:22:56.610 EOF 00:22:56.610 )") 00:22:56.610 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:56.610 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:56.610 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:56.610 { 00:22:56.610 "params": { 00:22:56.610 "name": "Nvme$subsystem", 00:22:56.610 "trtype": "$TEST_TRANSPORT", 00:22:56.610 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:56.610 "adrfam": "ipv4", 00:22:56.610 "trsvcid": "$NVMF_PORT", 00:22:56.610 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:56.611 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:56.611 "hdgst": ${hdgst:-false}, 00:22:56.611 "ddgst": ${ddgst:-false} 00:22:56.611 }, 00:22:56.611 "method": "bdev_nvme_attach_controller" 00:22:56.611 } 00:22:56.611 EOF 00:22:56.611 )") 00:22:56.611 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:56.611 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:56.611 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:56.611 { 00:22:56.611 "params": { 00:22:56.611 "name": "Nvme$subsystem", 00:22:56.611 "trtype": "$TEST_TRANSPORT", 00:22:56.611 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:56.611 "adrfam": "ipv4", 00:22:56.611 "trsvcid": "$NVMF_PORT", 00:22:56.611 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:56.611 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:56.611 "hdgst": ${hdgst:-false}, 00:22:56.611 "ddgst": ${ddgst:-false} 00:22:56.611 }, 00:22:56.611 "method": "bdev_nvme_attach_controller" 00:22:56.611 } 00:22:56.611 EOF 00:22:56.611 )") 00:22:56.611 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:56.611 [2024-06-10 10:48:20.854060] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 
24.03.0 initialization... 00:22:56.611 [2024-06-10 10:48:20.854113] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid910587 ] 00:22:56.611 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:56.611 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:56.611 { 00:22:56.611 "params": { 00:22:56.611 "name": "Nvme$subsystem", 00:22:56.611 "trtype": "$TEST_TRANSPORT", 00:22:56.611 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:56.611 "adrfam": "ipv4", 00:22:56.611 "trsvcid": "$NVMF_PORT", 00:22:56.611 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:56.611 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:56.611 "hdgst": ${hdgst:-false}, 00:22:56.611 "ddgst": ${ddgst:-false} 00:22:56.611 }, 00:22:56.611 "method": "bdev_nvme_attach_controller" 00:22:56.611 } 00:22:56.611 EOF 00:22:56.611 )") 00:22:56.611 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:56.611 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:56.611 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:56.611 { 00:22:56.611 "params": { 00:22:56.611 "name": "Nvme$subsystem", 00:22:56.611 "trtype": "$TEST_TRANSPORT", 00:22:56.611 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:56.611 "adrfam": "ipv4", 00:22:56.611 "trsvcid": "$NVMF_PORT", 00:22:56.611 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:56.611 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:56.611 "hdgst": ${hdgst:-false}, 00:22:56.611 "ddgst": ${ddgst:-false} 00:22:56.611 }, 00:22:56.611 "method": "bdev_nvme_attach_controller" 00:22:56.611 } 00:22:56.611 EOF 00:22:56.611 )") 00:22:56.611 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:56.611 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:56.611 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:56.611 { 00:22:56.611 "params": { 00:22:56.611 "name": "Nvme$subsystem", 00:22:56.611 "trtype": "$TEST_TRANSPORT", 00:22:56.611 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:56.611 "adrfam": "ipv4", 00:22:56.611 "trsvcid": "$NVMF_PORT", 00:22:56.611 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:56.611 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:56.611 "hdgst": ${hdgst:-false}, 00:22:56.611 "ddgst": ${ddgst:-false} 00:22:56.611 }, 00:22:56.611 "method": "bdev_nvme_attach_controller" 00:22:56.611 } 00:22:56.611 EOF 00:22:56.611 )") 00:22:56.611 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:56.611 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:56.611 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:56.611 { 00:22:56.611 "params": { 00:22:56.611 "name": "Nvme$subsystem", 00:22:56.611 "trtype": "$TEST_TRANSPORT", 00:22:56.611 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:56.611 "adrfam": "ipv4", 00:22:56.611 "trsvcid": "$NVMF_PORT", 00:22:56.611 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:56.611 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:56.611 
"hdgst": ${hdgst:-false}, 00:22:56.611 "ddgst": ${ddgst:-false} 00:22:56.611 }, 00:22:56.611 "method": "bdev_nvme_attach_controller" 00:22:56.611 } 00:22:56.611 EOF 00:22:56.611 )") 00:22:56.611 EAL: No free 2048 kB hugepages reported on node 1 00:22:56.611 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:56.611 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:22:56.611 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:22:56.611 10:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:56.611 "params": { 00:22:56.611 "name": "Nvme1", 00:22:56.611 "trtype": "tcp", 00:22:56.611 "traddr": "10.0.0.2", 00:22:56.611 "adrfam": "ipv4", 00:22:56.611 "trsvcid": "4420", 00:22:56.611 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:56.611 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:56.611 "hdgst": false, 00:22:56.611 "ddgst": false 00:22:56.611 }, 00:22:56.611 "method": "bdev_nvme_attach_controller" 00:22:56.611 },{ 00:22:56.611 "params": { 00:22:56.611 "name": "Nvme2", 00:22:56.611 "trtype": "tcp", 00:22:56.611 "traddr": "10.0.0.2", 00:22:56.611 "adrfam": "ipv4", 00:22:56.611 "trsvcid": "4420", 00:22:56.611 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:56.611 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:56.611 "hdgst": false, 00:22:56.611 "ddgst": false 00:22:56.611 }, 00:22:56.611 "method": "bdev_nvme_attach_controller" 00:22:56.611 },{ 00:22:56.611 "params": { 00:22:56.611 "name": "Nvme3", 00:22:56.611 "trtype": "tcp", 00:22:56.611 "traddr": "10.0.0.2", 00:22:56.611 "adrfam": "ipv4", 00:22:56.611 "trsvcid": "4420", 00:22:56.611 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:56.611 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:56.611 "hdgst": false, 00:22:56.611 "ddgst": false 00:22:56.611 }, 00:22:56.611 "method": "bdev_nvme_attach_controller" 00:22:56.611 },{ 00:22:56.611 "params": { 00:22:56.611 "name": "Nvme4", 00:22:56.611 "trtype": "tcp", 00:22:56.611 "traddr": "10.0.0.2", 00:22:56.611 "adrfam": "ipv4", 00:22:56.611 "trsvcid": "4420", 00:22:56.611 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:56.611 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:56.611 "hdgst": false, 00:22:56.611 "ddgst": false 00:22:56.611 }, 00:22:56.611 "method": "bdev_nvme_attach_controller" 00:22:56.611 },{ 00:22:56.611 "params": { 00:22:56.611 "name": "Nvme5", 00:22:56.611 "trtype": "tcp", 00:22:56.611 "traddr": "10.0.0.2", 00:22:56.611 "adrfam": "ipv4", 00:22:56.611 "trsvcid": "4420", 00:22:56.611 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:56.611 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:56.611 "hdgst": false, 00:22:56.611 "ddgst": false 00:22:56.611 }, 00:22:56.611 "method": "bdev_nvme_attach_controller" 00:22:56.611 },{ 00:22:56.611 "params": { 00:22:56.611 "name": "Nvme6", 00:22:56.611 "trtype": "tcp", 00:22:56.611 "traddr": "10.0.0.2", 00:22:56.612 "adrfam": "ipv4", 00:22:56.612 "trsvcid": "4420", 00:22:56.612 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:56.612 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:56.612 "hdgst": false, 00:22:56.612 "ddgst": false 00:22:56.612 }, 00:22:56.612 "method": "bdev_nvme_attach_controller" 00:22:56.612 },{ 00:22:56.612 "params": { 00:22:56.612 "name": "Nvme7", 00:22:56.612 "trtype": "tcp", 00:22:56.612 "traddr": "10.0.0.2", 00:22:56.612 "adrfam": "ipv4", 00:22:56.612 "trsvcid": "4420", 00:22:56.612 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:56.612 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:56.612 "hdgst": false, 
00:22:56.612 "ddgst": false 00:22:56.612 }, 00:22:56.612 "method": "bdev_nvme_attach_controller" 00:22:56.612 },{ 00:22:56.612 "params": { 00:22:56.612 "name": "Nvme8", 00:22:56.612 "trtype": "tcp", 00:22:56.612 "traddr": "10.0.0.2", 00:22:56.612 "adrfam": "ipv4", 00:22:56.612 "trsvcid": "4420", 00:22:56.612 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:56.612 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:56.612 "hdgst": false, 00:22:56.612 "ddgst": false 00:22:56.612 }, 00:22:56.612 "method": "bdev_nvme_attach_controller" 00:22:56.612 },{ 00:22:56.612 "params": { 00:22:56.612 "name": "Nvme9", 00:22:56.612 "trtype": "tcp", 00:22:56.612 "traddr": "10.0.0.2", 00:22:56.612 "adrfam": "ipv4", 00:22:56.612 "trsvcid": "4420", 00:22:56.612 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:56.612 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:56.612 "hdgst": false, 00:22:56.612 "ddgst": false 00:22:56.612 }, 00:22:56.612 "method": "bdev_nvme_attach_controller" 00:22:56.612 },{ 00:22:56.612 "params": { 00:22:56.612 "name": "Nvme10", 00:22:56.612 "trtype": "tcp", 00:22:56.612 "traddr": "10.0.0.2", 00:22:56.612 "adrfam": "ipv4", 00:22:56.612 "trsvcid": "4420", 00:22:56.612 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:56.612 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:56.612 "hdgst": false, 00:22:56.612 "ddgst": false 00:22:56.612 }, 00:22:56.612 "method": "bdev_nvme_attach_controller" 00:22:56.612 }' 00:22:56.872 [2024-06-10 10:48:20.915056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.872 [2024-06-10 10:48:20.980084] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.257 Running I/O for 10 seconds... 00:22:58.257 10:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:58.257 10:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:22:58.257 10:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:58.257 10:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:58.257 10:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:58.517 10:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:58.517 10:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:58.517 10:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:58.517 10:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:58.517 10:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:58.517 10:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:22:58.517 10:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:22:58.517 10:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:58.517 10:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:58.517 10:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:58.517 10:48:22 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:58.517 10:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:58.517 10:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:58.517 10:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:58.517 10:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:22:58.517 10:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:22:58.517 10:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:58.778 10:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:58.778 10:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:58.778 10:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:58.778 10:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:58.778 10:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:58.778 10:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:58.778 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:58.778 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:22:58.778 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:22:58.779 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:59.039 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:59.039 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:59.039 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:59.039 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:59.039 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:59.039 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:59.317 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:59.317 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:22:59.317 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:22:59.317 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:22:59.317 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:22:59.317 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:22:59.317 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 910254 00:22:59.317 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@949 -- # '[' -z 910254 ']' 00:22:59.317 10:48:23 
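The waitforio block above (target/shutdown.sh@50-@69) polls bdevperf over its RPC socket until Nvme1n1 reports at least 100 completed reads; in this run the counter goes 3, then 67, then 131, at which point the -ge 100 test passes, ret is set to 0 and the loop breaks so that shutdown.sh@135 can kill the nvmf target (pid 910254) while bdevperf is still issuing I/O. Reassembled from that xtrace (the leading argument checks are omitted), the helper is essentially:

  # Poll bdev_get_iostat until the bdev shows at least 100 reads, retrying up to
  # 10 times with a 0.25 s delay between attempts (as traced above).
  waitforio() {
    local rpc_sock=$1 bdev=$2
    local ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
      read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
      if [ "$read_io_count" -ge 100 ]; then
        ret=0
        break
      fi
      sleep 0.25
    done
    return $ret
  }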
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # kill -0 910254 00:22:59.317 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # uname 00:22:59.317 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:22:59.317 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 910254 00:22:59.317 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:22:59.317 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:22:59.317 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 910254' 00:22:59.317 killing process with pid 910254 00:22:59.317 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # kill 910254 00:22:59.317 [2024-06-10 10:48:23.390964] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:59.317 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # wait 910254 00:22:59.317 [2024-06-10 10:48:23.395161] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cce20 is same with the state(5) to be set 00:22:59.317 [2024-06-10 10:48:23.395192] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cce20 is same with the state(5) to be set 00:22:59.317 [2024-06-10 10:48:23.395197] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cce20 is same with the state(5) to be set 00:22:59.317 [2024-06-10 10:48:23.395203] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cce20 is same with the state(5) to be set 00:22:59.317 [2024-06-10 10:48:23.395207] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cce20 is same with the state(5) to be set 00:22:59.317 [2024-06-10 10:48:23.395212] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cce20 is same with the state(5) to be set 00:22:59.317 [2024-06-10 10:48:23.395217] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cce20 is same with the state(5) to be set 00:22:59.317 [2024-06-10 10:48:23.395221] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cce20 is same with the state(5) to be set 00:22:59.317 [2024-06-10 10:48:23.395225] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cce20 is same with the state(5) to be set 00:22:59.317 [2024-06-10 10:48:23.395230] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cce20 is same with the state(5) to be set 00:22:59.317 [2024-06-10 10:48:23.395234] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cce20 is same with the state(5) to be set 00:22:59.317 [2024-06-10 10:48:23.395239] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cce20 is same with the state(5) to be set 00:22:59.317 [2024-06-10 10:48:23.395247] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cce20 is same with the state(5) to be set 00:22:59.317 [2024-06-10 10:48:23.395252] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cce20 is same with the state(5) to be set
[the same tcp.c:1602:nvmf_tcp_qpair_set_recv_state *ERROR* record repeats back-to-back for tqpair=0x21cce20, 0x2260390, 0x21cd2c0, 0x21cd760 and 0x21cdc20 between 2024-06-10 10:48:23.395256 and 10:48:23.400027]
00:22:59.321 [2024-06-10
10:48:23.400031] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cdc20 is same with the state(5) to be set 00:22:59.321 [2024-06-10 10:48:23.400036] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cdc20 is same with the state(5) to be set 00:22:59.321 [2024-06-10 10:48:23.400040] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cdc20 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400044] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cdc20 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400048] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cdc20 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400053] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cdc20 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400057] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cdc20 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400062] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cdc20 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400068] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cdc20 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400072] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cdc20 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400076] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cdc20 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400081] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cdc20 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400085] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cdc20 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400089] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cdc20 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400093] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cdc20 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400097] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cdc20 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400102] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cdc20 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400106] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cdc20 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400111] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cdc20 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400115] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cdc20 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400120] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cdc20 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400587] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same 
with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400600] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400608] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400612] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400617] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400621] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400626] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400630] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400634] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400639] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400643] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400647] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400652] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400656] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400660] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400664] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400669] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400674] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400678] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400682] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400687] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400691] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400695] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400699] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400704] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400714] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400719] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400723] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400727] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400733] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400737] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400741] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400746] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400750] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400754] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400759] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400763] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400767] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.322 [2024-06-10 10:48:23.400771] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.400776] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.400780] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.400784] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.400789] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.400793] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the 
state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.400797] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.400802] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.400806] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.400810] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.400815] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.400819] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.400823] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.400827] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.400832] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.400836] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.400840] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.400844] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.400850] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.400855] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.400859] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.400863] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.400868] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.400872] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.400876] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce0c0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.401469] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce560 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.401479] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce560 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.401483] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x21ce560 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402479] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402492] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402497] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402502] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402507] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402511] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402516] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402520] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402525] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402529] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402534] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402539] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402543] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402548] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402552] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402557] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402562] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402569] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402573] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402578] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402583] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 
10:48:23.402588] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402592] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402597] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402601] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402606] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402610] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402614] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402619] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402623] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402628] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402632] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402637] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402642] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402647] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402651] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402656] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402660] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402665] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402669] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402673] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402678] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set 00:22:59.323 [2024-06-10 10:48:23.402683] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same 
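The condensed block above is a single message from tcp.c:1602 (nvmf_tcp_qpair_set_recv_state) printed repeatedly for a handful of tqpair addresses; by its own wording it fires when the receive state being set on a qpair is the state the qpair already holds. A minimal sketch of that kind of guard follows; the struct, the names and the state numbering are assumptions for illustration only, not SPDK's actual definitions.

    /* sketch.c - illustrative guard of the kind that produces the repeated
     * "recv state ... is same with the state(N) to be set" error above.
     * All names and values here are hypothetical. */
    #include <stdio.h>

    enum recv_state { RECV_READY = 0, RECV_ERROR = 5 };

    struct tqpair { enum recv_state recv_state; };

    static void set_recv_state(struct tqpair *qp, enum recv_state state)
    {
        if (qp->recv_state == state) {
            /* Asked to set the state the qpair already holds: log and return early. */
            fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)qp, (int)state);
            return;
        }
        qp->recv_state = state; /* normal transition */
    }

    int main(void)
    {
        struct tqpair qp = { .recv_state = RECV_READY };
        set_recv_state(&qp, RECV_ERROR); /* first transition: silent */
        set_recv_state(&qp, RECV_ERROR); /* same state again: prints the error */
        return 0;
    }

Under that reading, every further call with an unchanged state logs one more copy, which is consistent with the long runs of identical lines per tqpair above.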
00:22:59.324 [2024-06-10 10:48:23.406518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:59.324 [2024-06-10 10:48:23.406551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(the ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair above repeats for cid:1, cid:2 and cid:3 of the same admin queue, 10:48:23.406561 through 10:48:23.406601)
00:22:59.324 [2024-06-10 10:48:23.406608] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4a5610 is same with the state(5) to be set
(the same group of four aborted ASYNC EVENT REQUESTs followed by one nvme_tcp.c:323 recv-state error repeats for tqpair=0xb6c4c0, 0x9c4f10, 0x9c3b20, 0xa70c20, 0xaf15e0, 0x9a1f60, 0x9cd300 and 0xb0abf0, 10:48:23.406639 through 10:48:23.407320)
00:22:59.325 [2024-06-10 10:48:23.411765] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ceec0 is same with the state(5) to be set
(repeats through 10:48:23.411818 for tqpair=0x21ceec0)
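In the NOTICE pairs above, each outstanding admin ASYNC EVENT REQUEST (opcode 0c) is completed with status "ABORTED - SQ DELETION (00/08)" as its queue goes away. The "(00/08)" is the status code type / status code pair from the completion: type 0x0 is the NVMe generic command status set, and code 0x08 in that set is "Command Aborted due to SQ Deletion". A small stand-alone decoder for that pair, written only as an illustration and not as SPDK's own print helper, could look like this:

    /* decode_status.c - illustrative decoder for the "(sct/sc)" pair printed
     * in the completions above; it only covers the values seen in this log. */
    #include <stdio.h>

    static const char *decode(unsigned sct, unsigned sc)
    {
        if (sct == 0x0) { /* generic command status */
            if (sc == 0x00) return "SUCCESS";
            if (sc == 0x08) return "ABORTED - SQ DELETION";
        }
        return "other status";
    }

    int main(void)
    {
        /* Every completion in this section carries (00/08). */
        printf("(00/08) -> %s\n", decode(0x0, 0x08));
        return 0;
    }

The READ and WRITE commands further down are completed with the same status, so the aborts cover the I/O queue (qid:1) as well as the admin queue (qid:0).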
00:22:59.325 [2024-06-10 10:48:23.412287] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cf360 is same with the state(5) to be set
(repeats through 10:48:23.412571 for tqpair=0x21cf360)
00:22:59.326 [2024-06-10 10:48:23.426977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.326 [2024-06-10 10:48:23.427010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(the READ / ABORTED - SQ DELETION pair above repeats for cid:63 lba:24448, and the same pair then repeats for WRITE commands cid:0 through cid:29, lba:24576 through lba:28288 in steps of 128 blocks, 10:48:23.427026 through 10:48:23.427532)
00:22:59.326 [2024-06-10 10:48:23.427541] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.326 [2024-06-10 10:48:23.427549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.326 [2024-06-10 10:48:23.427559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.327 [2024-06-10 10:48:23.427566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.327 [2024-06-10 10:48:23.427575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.327 [2024-06-10 10:48:23.427583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.327 [2024-06-10 10:48:23.427592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.327 [2024-06-10 10:48:23.427600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.327 [2024-06-10 10:48:23.427609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.327 [2024-06-10 10:48:23.427617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.327 [2024-06-10 10:48:23.427626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.327 [2024-06-10 10:48:23.427633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.327 [2024-06-10 10:48:23.427642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.327 [2024-06-10 10:48:23.427648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.327 [2024-06-10 10:48:23.427657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.327 [2024-06-10 10:48:23.427665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.327 [2024-06-10 10:48:23.427674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.327 [2024-06-10 10:48:23.427681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.327 [2024-06-10 10:48:23.427690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.327 [2024-06-10 10:48:23.427696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.327 [2024-06-10 10:48:23.427705] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.327 [2024-06-10 10:48:23.427713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.327 [2024-06-10 10:48:23.427722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.327 [2024-06-10 10:48:23.427728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.327 [2024-06-10 10:48:23.427738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.327 [2024-06-10 10:48:23.427745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.327 [2024-06-10 10:48:23.427755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.327 [2024-06-10 10:48:23.427762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.327 [2024-06-10 10:48:23.427771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.327 [2024-06-10 10:48:23.427778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.327 [2024-06-10 10:48:23.427787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.327 [2024-06-10 10:48:23.427794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.327 [2024-06-10 10:48:23.427806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.327 [2024-06-10 10:48:23.427814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.327 [2024-06-10 10:48:23.427822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.327 [2024-06-10 10:48:23.427829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.327 [2024-06-10 10:48:23.427839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.327 [2024-06-10 10:48:23.427846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.327 [2024-06-10 10:48:23.427854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.327 [2024-06-10 10:48:23.427862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.327 [2024-06-10 10:48:23.427871] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.327 [2024-06-10 10:48:23.427878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.327 [2024-06-10 10:48:23.427886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.327 [2024-06-10 10:48:23.427893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.327 [2024-06-10 10:48:23.427902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.327 [2024-06-10 10:48:23.427909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.327 [2024-06-10 10:48:23.427918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.327 [2024-06-10 10:48:23.427925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.327 [2024-06-10 10:48:23.427934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.327 [2024-06-10 10:48:23.427941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.327 [2024-06-10 10:48:23.427950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.327 [2024-06-10 10:48:23.427957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.327 [2024-06-10 10:48:23.427966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.327 [2024-06-10 10:48:23.427973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.327 [2024-06-10 10:48:23.427982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.327 [2024-06-10 10:48:23.427989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.327 [2024-06-10 10:48:23.427998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.328 [2024-06-10 10:48:23.428007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.328 [2024-06-10 10:48:23.428016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.328 [2024-06-10 10:48:23.428023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.328 [2024-06-10 10:48:23.428032] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.328 [2024-06-10 10:48:23.428039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.328 [2024-06-10 10:48:23.428048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.328 [2024-06-10 10:48:23.428055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.328 [2024-06-10 10:48:23.428123] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x99a280 was disconnected and freed. reset controller. 00:22:59.328 [2024-06-10 10:48:23.428202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.328 [2024-06-10 10:48:23.428210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.328 [2024-06-10 10:48:23.428221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.328 [2024-06-10 10:48:23.428228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.328 [2024-06-10 10:48:23.428237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.328 [2024-06-10 10:48:23.428249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.328 [2024-06-10 10:48:23.428258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.328 [2024-06-10 10:48:23.428265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.328 [2024-06-10 10:48:23.428275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.328 [2024-06-10 10:48:23.428281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.328 [2024-06-10 10:48:23.428291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.328 [2024-06-10 10:48:23.428298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.328 [2024-06-10 10:48:23.428307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.328 [2024-06-10 10:48:23.428314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.328 [2024-06-10 10:48:23.428323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.328 [2024-06-10 10:48:23.428330] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.328 [2024-06-10 10:48:23.428339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.328 [2024-06-10 10:48:23.428348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.328 [2024-06-10 10:48:23.428357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.328 [2024-06-10 10:48:23.428364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.328 [2024-06-10 10:48:23.428374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.328 [2024-06-10 10:48:23.428381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.328 [2024-06-10 10:48:23.428390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.328 [2024-06-10 10:48:23.428397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.328 [2024-06-10 10:48:23.428406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.328 [2024-06-10 10:48:23.428413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.328 [2024-06-10 10:48:23.428422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.328 [2024-06-10 10:48:23.428429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.328 [2024-06-10 10:48:23.428438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.328 [2024-06-10 10:48:23.428445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.328 [2024-06-10 10:48:23.428454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.328 [2024-06-10 10:48:23.428462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.328 [2024-06-10 10:48:23.428471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.328 [2024-06-10 10:48:23.428478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.329 [2024-06-10 10:48:23.428487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.329 [2024-06-10 10:48:23.428494] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.329 [2024-06-10 10:48:23.428503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.329 [2024-06-10 10:48:23.428510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.329 [2024-06-10 10:48:23.428519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.329 [2024-06-10 10:48:23.428526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.329 [2024-06-10 10:48:23.428535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.329 [2024-06-10 10:48:23.428542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.329 [2024-06-10 10:48:23.428554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.329 [2024-06-10 10:48:23.428561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.329 [2024-06-10 10:48:23.428570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.329 [2024-06-10 10:48:23.428577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.329 [2024-06-10 10:48:23.428586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.329 [2024-06-10 10:48:23.428593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.329 [2024-06-10 10:48:23.428602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.329 [2024-06-10 10:48:23.428609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.329 [2024-06-10 10:48:23.428618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.329 [2024-06-10 10:48:23.428625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.329 [2024-06-10 10:48:23.428634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.329 [2024-06-10 10:48:23.428641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.329 [2024-06-10 10:48:23.428650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.329 [2024-06-10 10:48:23.428657] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.329 [2024-06-10 10:48:23.428667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.329 [2024-06-10 10:48:23.428674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.329 [2024-06-10 10:48:23.428683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.329 [2024-06-10 10:48:23.428690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.329 [2024-06-10 10:48:23.428698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.329 [2024-06-10 10:48:23.428705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.329 [2024-06-10 10:48:23.428715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.329 [2024-06-10 10:48:23.428722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.329 [2024-06-10 10:48:23.428731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.329 [2024-06-10 10:48:23.428738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.329 [2024-06-10 10:48:23.428752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.329 [2024-06-10 10:48:23.428760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.329 [2024-06-10 10:48:23.428769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.329 [2024-06-10 10:48:23.428776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.329 [2024-06-10 10:48:23.428785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.329 [2024-06-10 10:48:23.428792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.329 [2024-06-10 10:48:23.428801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.329 [2024-06-10 10:48:23.428808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.329 [2024-06-10 10:48:23.428817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.329 [2024-06-10 10:48:23.428824] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.329 [2024-06-10 10:48:23.428832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.329 [2024-06-10 10:48:23.428839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.329 [2024-06-10 10:48:23.428848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.329 [2024-06-10 10:48:23.428855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.329 [2024-06-10 10:48:23.428865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.329 [2024-06-10 10:48:23.428871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.329 [2024-06-10 10:48:23.428881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.329 [2024-06-10 10:48:23.428888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.329 [2024-06-10 10:48:23.428897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.329 [2024-06-10 10:48:23.428904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.329 [2024-06-10 10:48:23.428913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.329 [2024-06-10 10:48:23.428921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.329 [2024-06-10 10:48:23.428930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.329 [2024-06-10 10:48:23.428937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.329 [2024-06-10 10:48:23.428946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.329 [2024-06-10 10:48:23.428952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.329 [2024-06-10 10:48:23.428963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.329 [2024-06-10 10:48:23.428970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.329 [2024-06-10 10:48:23.428979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.329 [2024-06-10 10:48:23.428986] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.329 [2024-06-10 10:48:23.428995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.329 [2024-06-10 10:48:23.429002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.329 [2024-06-10 10:48:23.429011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.329 [2024-06-10 10:48:23.429018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.329 [2024-06-10 10:48:23.429027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.329 [2024-06-10 10:48:23.429034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.329 [2024-06-10 10:48:23.429043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.329 [2024-06-10 10:48:23.429050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.329 [2024-06-10 10:48:23.429059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.329 [2024-06-10 10:48:23.429066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.330 [2024-06-10 10:48:23.429075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.330 [2024-06-10 10:48:23.429082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.330 [2024-06-10 10:48:23.429091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.330 [2024-06-10 10:48:23.429098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.330 [2024-06-10 10:48:23.429107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.330 [2024-06-10 10:48:23.429114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.330 [2024-06-10 10:48:23.429124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.330 [2024-06-10 10:48:23.429131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.330 [2024-06-10 10:48:23.429139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.330 [2024-06-10 10:48:23.429147] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.330 [2024-06-10 10:48:23.429156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.330 [2024-06-10 10:48:23.429164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.330 [2024-06-10 10:48:23.429173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.330 [2024-06-10 10:48:23.429180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.330 [2024-06-10 10:48:23.429189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.330 [2024-06-10 10:48:23.429196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.330 [2024-06-10 10:48:23.429205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.330 [2024-06-10 10:48:23.429212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.330 [2024-06-10 10:48:23.429221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.330 [2024-06-10 10:48:23.429228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.330 [2024-06-10 10:48:23.429237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.330 [2024-06-10 10:48:23.429248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.330 [2024-06-10 10:48:23.429297] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x99b7a0 was disconnected and freed. reset controller. 
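Context for the dump above: each outstanding READ/WRITE on qid:1 is completed with generic status 00/08 (ABORTED - SQ DELETION) and dnr:0 once the submission queue goes away on the reset path, and bdev_nvme then reports the affected qpairs (0x99a280, 0x99b7a0) disconnected and freed before resetting the controller. As a minimal sketch only, not code from this test run, the snippet below shows how a direct spdk_nvme user could recognize that same status pair in an I/O completion callback and retry after the reset; io_ctx_t and requeue_io() are hypothetical application-side names, not SPDK or test-suite identifiers.

/*
 * Minimal sketch, assuming a queued I/O whose completion callback uses the
 * spdk_nvme_cmd_cb signature. It matches the "(00/08)" status pair printed
 * by spdk_nvme_print_completion() in the log above. io_ctx_t and
 * requeue_io() are hypothetical.
 */
#include "spdk/nvme.h"

typedef struct {
	void *payload;              /* hypothetical per-I/O bookkeeping */
} io_ctx_t;

static void
requeue_io(io_ctx_t *io)
{
	(void)io;                   /* hypothetical: resubmit once the controller reset completes */
}

static void
io_complete_cb(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	io_ctx_t *io = ctx;

	/* Generic status code type plus status code 0x08 is exactly the
	 * "ABORTED - SQ DELETION (00/08)" pair logged above; dnr:0 in the
	 * log means the command is allowed to be retried. */
	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION &&
	    !cpl->status.dnr) {
		requeue_io(io);     /* the I/O was never executed; safe to retry after reset */
		return;
	}
	/* other errors and successful completions follow the normal path */
}

In this run no such application code is involved: the bdev_nvme layer itself performs the disconnect/free and controller reset reported by bdev_nvme_disconnected_qpair_cb above.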
00:22:59.330 [2024-06-10 10:48:23.429497] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4a5610 (9): Bad file descriptor 00:22:59.330 [2024-06-10 10:48:23.429518] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb6c4c0 (9): Bad file descriptor 00:22:59.330 [2024-06-10 10:48:23.429533] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c4f10 (9): Bad file descriptor 00:22:59.330 [2024-06-10 10:48:23.429549] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c3b20 (9): Bad file descriptor 00:22:59.330 [2024-06-10 10:48:23.429564] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa70c20 (9): Bad file descriptor 00:22:59.330 [2024-06-10 10:48:23.429576] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf15e0 (9): Bad file descriptor 00:22:59.330 [2024-06-10 10:48:23.429602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.330 [2024-06-10 10:48:23.429612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.330 [2024-06-10 10:48:23.429620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.330 [2024-06-10 10:48:23.429627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.330 [2024-06-10 10:48:23.429635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.330 [2024-06-10 10:48:23.429642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.330 [2024-06-10 10:48:23.429649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.330 [2024-06-10 10:48:23.429656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.330 [2024-06-10 10:48:23.429666] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb09f70 is same with the state(5) to be set 00:22:59.330 [2024-06-10 10:48:23.429683] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a1f60 (9): Bad file descriptor 00:22:59.330 [2024-06-10 10:48:23.429697] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd300 (9): Bad file descriptor 00:22:59.330 [2024-06-10 10:48:23.429712] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0abf0 (9): Bad file descriptor 00:22:59.330 [2024-06-10 10:48:23.430042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.330 [2024-06-10 10:48:23.430057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.330 [2024-06-10 10:48:23.430070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.330 
[2024-06-10 10:48:23.430077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.330 [2024-06-10 10:48:23.430087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.330 [2024-06-10 10:48:23.430094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.330 [2024-06-10 10:48:23.430103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.330 [2024-06-10 10:48:23.430110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.330 [2024-06-10 10:48:23.430120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.330 [2024-06-10 10:48:23.430127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.330 [2024-06-10 10:48:23.430136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.330 [2024-06-10 10:48:23.430143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.330 [2024-06-10 10:48:23.430152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.330 [2024-06-10 10:48:23.430159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.330 [2024-06-10 10:48:23.430168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.330 [2024-06-10 10:48:23.430175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.330 [2024-06-10 10:48:23.430184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.330 [2024-06-10 10:48:23.430191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.330 [2024-06-10 10:48:23.430200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.330 [2024-06-10 10:48:23.430207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.330 [2024-06-10 10:48:23.430217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.330 [2024-06-10 10:48:23.430227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.330 [2024-06-10 10:48:23.430236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.330 [2024-06-10 
10:48:23.430250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.330 [2024-06-10 10:48:23.430260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.330 [2024-06-10 10:48:23.430268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.330 [2024-06-10 10:48:23.430277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.330 [2024-06-10 10:48:23.430284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.330 [2024-06-10 10:48:23.430294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.330 [2024-06-10 10:48:23.430301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.330 [2024-06-10 10:48:23.430310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.331 [2024-06-10 10:48:23.430317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.331 [2024-06-10 10:48:23.430326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.331 [2024-06-10 10:48:23.430333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.331 [2024-06-10 10:48:23.430343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.331 [2024-06-10 10:48:23.430350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.331 [2024-06-10 10:48:23.430359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.331 [2024-06-10 10:48:23.430366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.331 [2024-06-10 10:48:23.430375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.331 [2024-06-10 10:48:23.430383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.331 [2024-06-10 10:48:23.430392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.331 [2024-06-10 10:48:23.430399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.331 [2024-06-10 10:48:23.430408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.331 [2024-06-10 
10:48:23.430415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.331 [2024-06-10 10:48:23.430424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.331 [2024-06-10 10:48:23.430431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.331 [2024-06-10 10:48:23.430442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.331 [2024-06-10 10:48:23.430449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.331 [2024-06-10 10:48:23.430458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.331 [2024-06-10 10:48:23.430465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.331 [2024-06-10 10:48:23.430474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.331 [2024-06-10 10:48:23.430481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.331 [2024-06-10 10:48:23.430490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.331 [2024-06-10 10:48:23.430497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.331 [2024-06-10 10:48:23.430506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.331 [2024-06-10 10:48:23.430513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.331 [2024-06-10 10:48:23.430522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.331 [2024-06-10 10:48:23.430529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.331 [2024-06-10 10:48:23.430538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.331 [2024-06-10 10:48:23.430545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.331 [2024-06-10 10:48:23.430555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.331 [2024-06-10 10:48:23.430561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.331 [2024-06-10 10:48:23.430570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.331 [2024-06-10 
10:48:23.430577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.331 [2024-06-10 10:48:23.430587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.331 [2024-06-10 10:48:23.430594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.331 [2024-06-10 10:48:23.430603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.331 [2024-06-10 10:48:23.430614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.331 [2024-06-10 10:48:23.430624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.331 [2024-06-10 10:48:23.430634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.331 [2024-06-10 10:48:23.430643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.331 [2024-06-10 10:48:23.430651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.331 [2024-06-10 10:48:23.430660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.331 [2024-06-10 10:48:23.430667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.331 [2024-06-10 10:48:23.430676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.331 [2024-06-10 10:48:23.430683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.331 [2024-06-10 10:48:23.430692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.331 [2024-06-10 10:48:23.430700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.331 [2024-06-10 10:48:23.430709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.331 [2024-06-10 10:48:23.430715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.331 [2024-06-10 10:48:23.430724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.331 [2024-06-10 10:48:23.430731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.331 [2024-06-10 10:48:23.430740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.331 [2024-06-10 
10:48:23.430747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.331 [2024-06-10 10:48:23.430756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.331 [2024-06-10 10:48:23.430763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.331 [2024-06-10 10:48:23.430773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.331 [2024-06-10 10:48:23.430780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.331 [2024-06-10 10:48:23.430789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.331 [2024-06-10 10:48:23.430795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.331 [2024-06-10 10:48:23.430805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.331 [2024-06-10 10:48:23.430812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.331 [2024-06-10 10:48:23.430821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.331 [2024-06-10 10:48:23.430827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.331 [2024-06-10 10:48:23.430836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.331 [2024-06-10 10:48:23.430843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.331 [2024-06-10 10:48:23.430854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.331 [2024-06-10 10:48:23.430861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.331 [2024-06-10 10:48:23.430870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.331 [2024-06-10 10:48:23.430878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.331 [2024-06-10 10:48:23.430887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.331 [2024-06-10 10:48:23.430894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.331 [2024-06-10 10:48:23.430903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.331 [2024-06-10 
10:48:23.430910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.332 [2024-06-10 10:48:23.430919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.332 [2024-06-10 10:48:23.430926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.332 [2024-06-10 10:48:23.430935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.332 [2024-06-10 10:48:23.430942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.332 [2024-06-10 10:48:23.430951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.332 [2024-06-10 10:48:23.430958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.332 [2024-06-10 10:48:23.430967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.332 [2024-06-10 10:48:23.430973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.332 [2024-06-10 10:48:23.430982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.332 [2024-06-10 10:48:23.430989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.332 [2024-06-10 10:48:23.430998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.332 [2024-06-10 10:48:23.431005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.332 [2024-06-10 10:48:23.431014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.332 [2024-06-10 10:48:23.431021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.332 [2024-06-10 10:48:23.431030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.332 [2024-06-10 10:48:23.431037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.332 [2024-06-10 10:48:23.431046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.332 [2024-06-10 10:48:23.431055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.332 [2024-06-10 10:48:23.431064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.332 [2024-06-10 
10:48:23.431071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.332 [2024-06-10 10:48:23.431080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.332 [2024-06-10 10:48:23.431087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.332 [2024-06-10 10:48:23.431096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.332 [2024-06-10 10:48:23.431103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.332 [2024-06-10 10:48:23.431111] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb40700 is same with the state(5) to be set 00:22:59.332 [2024-06-10 10:48:23.431152] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb40700 was disconnected and freed. reset controller. 00:22:59.332 [2024-06-10 10:48:23.435105] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:22:59.332 [2024-06-10 10:48:23.435133] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:22:59.332 [2024-06-10 10:48:23.435960] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:59.332 [2024-06-10 10:48:23.435984] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:59.332 [2024-06-10 10:48:23.436500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.332 [2024-06-10 10:48:23.436539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf15e0 with addr=10.0.0.2, port=4420 00:22:59.332 [2024-06-10 10:48:23.436552] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf15e0 is same with the state(5) to be set 00:22:59.332 [2024-06-10 10:48:23.436802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.332 [2024-06-10 10:48:23.436813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4a5610 with addr=10.0.0.2, port=4420 00:22:59.332 [2024-06-10 10:48:23.436820] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4a5610 is same with the state(5) to be set 00:22:59.332 [2024-06-10 10:48:23.436878] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:59.332 [2024-06-10 10:48:23.436920] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:59.332 [2024-06-10 10:48:23.436958] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:59.332 [2024-06-10 10:48:23.437288] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:59.332 [2024-06-10 10:48:23.437328] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:59.332 [2024-06-10 10:48:23.437607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.332 [2024-06-10 10:48:23.437621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c4f10 with addr=10.0.0.2, port=4420 00:22:59.332 [2024-06-10 10:48:23.437628] nvme_tcp.c: 
323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c4f10 is same with the state(5) to be set 00:22:59.332 [2024-06-10 10:48:23.437640] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf15e0 (9): Bad file descriptor 00:22:59.332 [2024-06-10 10:48:23.437650] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4a5610 (9): Bad file descriptor 00:22:59.332 [2024-06-10 10:48:23.437680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.332 [2024-06-10 10:48:23.437694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.332 [2024-06-10 10:48:23.437710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.332 [2024-06-10 10:48:23.437718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.332 [2024-06-10 10:48:23.437727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.332 [2024-06-10 10:48:23.437734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.332 [2024-06-10 10:48:23.437744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.332 [2024-06-10 10:48:23.437751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.332 [2024-06-10 10:48:23.437760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.332 [2024-06-10 10:48:23.437767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.332 [2024-06-10 10:48:23.437776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.332 [2024-06-10 10:48:23.437784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.332 [2024-06-10 10:48:23.437793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.332 [2024-06-10 10:48:23.437800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.332 [2024-06-10 10:48:23.437809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.332 [2024-06-10 10:48:23.437816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.332 [2024-06-10 10:48:23.437826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.332 [2024-06-10 10:48:23.437833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.332 [2024-06-10 10:48:23.437842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.332 [2024-06-10 10:48:23.437849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.332 [2024-06-10 10:48:23.437859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.332 [2024-06-10 10:48:23.437866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.332 [2024-06-10 10:48:23.437875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.332 [2024-06-10 10:48:23.437882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.332 [2024-06-10 10:48:23.437892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.332 [2024-06-10 10:48:23.437899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.332 [2024-06-10 10:48:23.437910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.332 [2024-06-10 10:48:23.437917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.332 [2024-06-10 10:48:23.437926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.332 [2024-06-10 10:48:23.437933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.332 [2024-06-10 10:48:23.437942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.333 [2024-06-10 10:48:23.437949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.333 [2024-06-10 10:48:23.437958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.333 [2024-06-10 10:48:23.437965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.333 [2024-06-10 10:48:23.437974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.333 [2024-06-10 10:48:23.437981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.333 [2024-06-10 10:48:23.437990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.333 [2024-06-10 10:48:23.437997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.333 [2024-06-10 10:48:23.438006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.333 [2024-06-10 10:48:23.438013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.333 [2024-06-10 10:48:23.438022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.333 [2024-06-10 10:48:23.438029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.333 [2024-06-10 10:48:23.438038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.333 [2024-06-10 10:48:23.438045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.333 [2024-06-10 10:48:23.438054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.333 [2024-06-10 10:48:23.438062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.333 [2024-06-10 10:48:23.438071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.333 [2024-06-10 10:48:23.438078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.333 [2024-06-10 10:48:23.438087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.333 [2024-06-10 10:48:23.438094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.333 [2024-06-10 10:48:23.438104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.333 [2024-06-10 10:48:23.438113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.333 [2024-06-10 10:48:23.438122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.333 [2024-06-10 10:48:23.438129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.333 [2024-06-10 10:48:23.438138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.333 [2024-06-10 10:48:23.438145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.333 [2024-06-10 10:48:23.438154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.333 [2024-06-10 10:48:23.438161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:59.333 [2024-06-10 10:48:23.438170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.333 [2024-06-10 10:48:23.438177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.333 [2024-06-10 10:48:23.438186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.333 [2024-06-10 10:48:23.438193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.333 [2024-06-10 10:48:23.438202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.333 [2024-06-10 10:48:23.438209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.333 [2024-06-10 10:48:23.438218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.333 [2024-06-10 10:48:23.438225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.333 [2024-06-10 10:48:23.438234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.333 [2024-06-10 10:48:23.438241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.333 [2024-06-10 10:48:23.438255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.333 [2024-06-10 10:48:23.438263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.333 [2024-06-10 10:48:23.438272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.333 [2024-06-10 10:48:23.438279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.333 [2024-06-10 10:48:23.438288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.333 [2024-06-10 10:48:23.438295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.333 [2024-06-10 10:48:23.438304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.333 [2024-06-10 10:48:23.438312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.333 [2024-06-10 10:48:23.438324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.333 [2024-06-10 10:48:23.438330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.333 
[2024-06-10 10:48:23.438340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.333 [2024-06-10 10:48:23.438347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.333 [2024-06-10 10:48:23.438356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.333 [2024-06-10 10:48:23.438363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.333 [2024-06-10 10:48:23.438373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.333 [2024-06-10 10:48:23.438380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.333 [2024-06-10 10:48:23.438389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.333 [2024-06-10 10:48:23.438396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.333 [2024-06-10 10:48:23.438405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.333 [2024-06-10 10:48:23.438412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.333 [2024-06-10 10:48:23.438421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.333 [2024-06-10 10:48:23.438428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.333 [2024-06-10 10:48:23.438438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.333 [2024-06-10 10:48:23.438445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.333 [2024-06-10 10:48:23.438454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.334 [2024-06-10 10:48:23.438461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.334 [2024-06-10 10:48:23.438470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.334 [2024-06-10 10:48:23.438477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.334 [2024-06-10 10:48:23.438486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.334 [2024-06-10 10:48:23.438493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.334 [2024-06-10 
10:48:23.438502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.334 [2024-06-10 10:48:23.438509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.334 [2024-06-10 10:48:23.438519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.334 [2024-06-10 10:48:23.438528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.334 [2024-06-10 10:48:23.438538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.334 [2024-06-10 10:48:23.438545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.334 [2024-06-10 10:48:23.438554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.334 [2024-06-10 10:48:23.438561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.334 [2024-06-10 10:48:23.438570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.334 [2024-06-10 10:48:23.438577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.334 [2024-06-10 10:48:23.438586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.334 [2024-06-10 10:48:23.438594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.334 [2024-06-10 10:48:23.438603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.334 [2024-06-10 10:48:23.438610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.334 [2024-06-10 10:48:23.438619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.334 [2024-06-10 10:48:23.438627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.334 [2024-06-10 10:48:23.438636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.334 [2024-06-10 10:48:23.438642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.334 [2024-06-10 10:48:23.438651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.334 [2024-06-10 10:48:23.438658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.334 [2024-06-10 10:48:23.438668] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.334 [2024-06-10 10:48:23.438675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.334 [2024-06-10 10:48:23.438684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.334 [2024-06-10 10:48:23.438690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.334 [2024-06-10 10:48:23.438700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.334 [2024-06-10 10:48:23.438707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.334 [2024-06-10 10:48:23.438716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.334 [2024-06-10 10:48:23.438723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.334 [2024-06-10 10:48:23.438734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.334 [2024-06-10 10:48:23.438741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.334 [2024-06-10 10:48:23.438749] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3d720 is same with the state(5) to be set 00:22:59.334 [2024-06-10 10:48:23.438792] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb3d720 was disconnected and freed. reset controller. 00:22:59.334 [2024-06-10 10:48:23.438878] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c4f10 (9): Bad file descriptor 00:22:59.334 [2024-06-10 10:48:23.438890] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:59.334 [2024-06-10 10:48:23.438897] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:59.334 [2024-06-10 10:48:23.438905] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:59.334 [2024-06-10 10:48:23.438917] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:59.334 [2024-06-10 10:48:23.438923] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:59.334 [2024-06-10 10:48:23.438930] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:59.334 [2024-06-10 10:48:23.440189] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:59.334 [2024-06-10 10:48:23.440202] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:59.334 [2024-06-10 10:48:23.440210] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:59.334 [2024-06-10 10:48:23.440231] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:59.334 [2024-06-10 10:48:23.440239] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:59.334 [2024-06-10 10:48:23.440253] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:59.334 [2024-06-10 10:48:23.440297] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb09f70 (9): Bad file descriptor 00:22:59.334 [2024-06-10 10:48:23.440368] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:59.334 [2024-06-10 10:48:23.440762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.334 [2024-06-10 10:48:23.440774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a1f60 with addr=10.0.0.2, port=4420 00:22:59.334 [2024-06-10 10:48:23.440781] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a1f60 is same with the state(5) to be set 00:22:59.334 [2024-06-10 10:48:23.441072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.334 [2024-06-10 10:48:23.441082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.334 [2024-06-10 10:48:23.441093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.334 [2024-06-10 10:48:23.441100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.334 [2024-06-10 10:48:23.441109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.334 [2024-06-10 10:48:23.441116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.334 [2024-06-10 10:48:23.441129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.334 [2024-06-10 10:48:23.441136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.334 [2024-06-10 10:48:23.441145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.334 [2024-06-10 10:48:23.441152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.334 [2024-06-10 10:48:23.441161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.334 [2024-06-10 10:48:23.441168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.334 [2024-06-10 10:48:23.441177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.334 [2024-06-10 10:48:23.441184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.334 [2024-06-10 10:48:23.441193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.334 [2024-06-10 10:48:23.441200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.334 [2024-06-10 10:48:23.441209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.334 [2024-06-10 10:48:23.441216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.334 [2024-06-10 10:48:23.441226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.334 [2024-06-10 10:48:23.441233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.335 [2024-06-10 10:48:23.441247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.335 [2024-06-10 10:48:23.441255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.335 [2024-06-10 10:48:23.441264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.335 [2024-06-10 10:48:23.441271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.335 [2024-06-10 10:48:23.441280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.335 [2024-06-10 10:48:23.441288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.335 [2024-06-10 10:48:23.441296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.335 [2024-06-10 10:48:23.441304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.335 [2024-06-10 10:48:23.441312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.335 [2024-06-10 10:48:23.441319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.335 [2024-06-10 10:48:23.441328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.335 [2024-06-10 10:48:23.441337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.335 [2024-06-10 10:48:23.441346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.335 [2024-06-10 10:48:23.441354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.335 [2024-06-10 10:48:23.441362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.335 [2024-06-10 10:48:23.441370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.335 [2024-06-10 10:48:23.441379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.335 [2024-06-10 10:48:23.441386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.335 [2024-06-10 10:48:23.441395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.335 [2024-06-10 10:48:23.441402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.335 [2024-06-10 10:48:23.441412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.335 [2024-06-10 10:48:23.441419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.335 [2024-06-10 10:48:23.441428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.335 [2024-06-10 10:48:23.441435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.335 [2024-06-10 10:48:23.441445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.335 [2024-06-10 10:48:23.441452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.335 [2024-06-10 10:48:23.441461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.335 [2024-06-10 10:48:23.441468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.335 [2024-06-10 10:48:23.441477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.335 [2024-06-10 10:48:23.441484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.335 [2024-06-10 10:48:23.441493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.335 [2024-06-10 10:48:23.441500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.335 [2024-06-10 10:48:23.441510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:59.335 [2024-06-10 10:48:23.441517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.335 [2024-06-10 10:48:23.441526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.335 [2024-06-10 10:48:23.441533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.335 [2024-06-10 10:48:23.441546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.335 [2024-06-10 10:48:23.441553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.335 [2024-06-10 10:48:23.441562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.335 [2024-06-10 10:48:23.441569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.335 [2024-06-10 10:48:23.441578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.335 [2024-06-10 10:48:23.441585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.335 [2024-06-10 10:48:23.441594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.335 [2024-06-10 10:48:23.441601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.335 [2024-06-10 10:48:23.441610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.335 [2024-06-10 10:48:23.441617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.335 [2024-06-10 10:48:23.441626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.335 [2024-06-10 10:48:23.441633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.335 [2024-06-10 10:48:23.441642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.335 [2024-06-10 10:48:23.441649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.335 [2024-06-10 10:48:23.441658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.335 [2024-06-10 10:48:23.441665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.335 [2024-06-10 10:48:23.441674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:59.335 [2024-06-10 10:48:23.441681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.335 [2024-06-10 10:48:23.441690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.335 [2024-06-10 10:48:23.441697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.335 [2024-06-10 10:48:23.441706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.335 [2024-06-10 10:48:23.441712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.335 [2024-06-10 10:48:23.441721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.335 [2024-06-10 10:48:23.441729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.335 [2024-06-10 10:48:23.441738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.335 [2024-06-10 10:48:23.441745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.335 [2024-06-10 10:48:23.441755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.335 [2024-06-10 10:48:23.441763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.335 [2024-06-10 10:48:23.441772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.335 [2024-06-10 10:48:23.441778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.335 [2024-06-10 10:48:23.441788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.335 [2024-06-10 10:48:23.441795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.335 [2024-06-10 10:48:23.441803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.335 [2024-06-10 10:48:23.441811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.335 [2024-06-10 10:48:23.441820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.335 [2024-06-10 10:48:23.441827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.335 [2024-06-10 10:48:23.441837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.335 [2024-06-10 
10:48:23.441843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-06-10 10:48:23.441852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-06-10 10:48:23.441859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-06-10 10:48:23.441868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-06-10 10:48:23.441875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-06-10 10:48:23.441884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-06-10 10:48:23.441891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-06-10 10:48:23.441900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-06-10 10:48:23.441907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-06-10 10:48:23.441917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-06-10 10:48:23.441924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-06-10 10:48:23.441933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-06-10 10:48:23.441940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-06-10 10:48:23.441949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-06-10 10:48:23.441957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-06-10 10:48:23.441967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-06-10 10:48:23.441974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-06-10 10:48:23.441983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-06-10 10:48:23.441990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-06-10 10:48:23.441999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-06-10 10:48:23.442006] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-06-10 10:48:23.442015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-06-10 10:48:23.442022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-06-10 10:48:23.442031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-06-10 10:48:23.442037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-06-10 10:48:23.442046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-06-10 10:48:23.442053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-06-10 10:48:23.442062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-06-10 10:48:23.442069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-06-10 10:48:23.442078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-06-10 10:48:23.442085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-06-10 10:48:23.442094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-06-10 10:48:23.442101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-06-10 10:48:23.442110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-06-10 10:48:23.442117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-06-10 10:48:23.442125] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad4040 is same with the state(5) to be set 00:22:59.336 [2024-06-10 10:48:23.443400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-06-10 10:48:23.443412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-06-10 10:48:23.443423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-06-10 10:48:23.443432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-06-10 10:48:23.443441] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-06-10 10:48:23.443448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-06-10 10:48:23.443458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-06-10 10:48:23.443465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-06-10 10:48:23.443474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-06-10 10:48:23.443481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-06-10 10:48:23.443491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-06-10 10:48:23.443497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-06-10 10:48:23.443506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-06-10 10:48:23.443514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-06-10 10:48:23.443523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-06-10 10:48:23.443529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-06-10 10:48:23.443538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-06-10 10:48:23.443545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-06-10 10:48:23.443554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-06-10 10:48:23.443561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-06-10 10:48:23.443570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-06-10 10:48:23.443577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-06-10 10:48:23.443586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-06-10 10:48:23.443593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-06-10 10:48:23.443602] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-06-10 10:48:23.443609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-06-10 10:48:23.443618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-06-10 10:48:23.443625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-06-10 10:48:23.443636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-06-10 10:48:23.443643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-06-10 10:48:23.443652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-06-10 10:48:23.443659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-06-10 10:48:23.443668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-06-10 10:48:23.443676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-06-10 10:48:23.443685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-06-10 10:48:23.443691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-06-10 10:48:23.443700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-06-10 10:48:23.443707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-06-10 10:48:23.443716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-06-10 10:48:23.443723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-06-10 10:48:23.443732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-06-10 10:48:23.443740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-06-10 10:48:23.443749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-06-10 10:48:23.443756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-06-10 10:48:23.443765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-06-10 10:48:23.443772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-06-10 10:48:23.443781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-06-10 10:48:23.443787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-06-10 10:48:23.443797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-06-10 10:48:23.443804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-06-10 10:48:23.443812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-06-10 10:48:23.443819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-06-10 10:48:23.443828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-06-10 10:48:23.443837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-06-10 10:48:23.443846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-06-10 10:48:23.443853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-06-10 10:48:23.443861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-06-10 10:48:23.443868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-06-10 10:48:23.443877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-06-10 10:48:23.443884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-06-10 10:48:23.443893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-06-10 10:48:23.443900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-06-10 10:48:23.443909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-06-10 10:48:23.443916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-06-10 10:48:23.443925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-06-10 10:48:23.443932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-06-10 10:48:23.443941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-06-10 10:48:23.443948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-06-10 10:48:23.443957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-06-10 10:48:23.443964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-06-10 10:48:23.443973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-06-10 10:48:23.443980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-06-10 10:48:23.443989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-06-10 10:48:23.443996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-06-10 10:48:23.444005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-06-10 10:48:23.444012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-06-10 10:48:23.444021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-06-10 10:48:23.444028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-06-10 10:48:23.444039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-06-10 10:48:23.444045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-06-10 10:48:23.444055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-06-10 10:48:23.444062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-06-10 10:48:23.444071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-06-10 10:48:23.444077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-06-10 10:48:23.444086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-06-10 10:48:23.444093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-06-10 10:48:23.444103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-06-10 10:48:23.444110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-06-10 10:48:23.444119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-06-10 10:48:23.444126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-06-10 10:48:23.444135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-06-10 10:48:23.444142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-06-10 10:48:23.444151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-06-10 10:48:23.444158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-06-10 10:48:23.444167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-06-10 10:48:23.444174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-06-10 10:48:23.444183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-06-10 10:48:23.444190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-06-10 10:48:23.444199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-06-10 10:48:23.444206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-06-10 10:48:23.444215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-06-10 10:48:23.444222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-06-10 10:48:23.444231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-06-10 10:48:23.444240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-06-10 10:48:23.444253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:59.338 [2024-06-10 10:48:23.444260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-06-10 10:48:23.444269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-06-10 10:48:23.444276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-06-10 10:48:23.444285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-06-10 10:48:23.444292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-06-10 10:48:23.444301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-06-10 10:48:23.444308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-06-10 10:48:23.444317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-06-10 10:48:23.444324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-06-10 10:48:23.444333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-06-10 10:48:23.444340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-06-10 10:48:23.444349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-06-10 10:48:23.444355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-06-10 10:48:23.444364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-06-10 10:48:23.444371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-06-10 10:48:23.444380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-06-10 10:48:23.444387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-06-10 10:48:23.444396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-06-10 10:48:23.444403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-06-10 10:48:23.444412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-06-10 
10:48:23.444419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-06-10 10:48:23.444428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-06-10 10:48:23.444435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-06-10 10:48:23.444444] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad5520 is same with the state(5) to be set 00:22:59.338 [2024-06-10 10:48:23.445719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-06-10 10:48:23.445732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-06-10 10:48:23.445745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-06-10 10:48:23.445753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-06-10 10:48:23.445764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-06-10 10:48:23.445773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-06-10 10:48:23.445784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-06-10 10:48:23.445792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-06-10 10:48:23.445803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-06-10 10:48:23.445811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-06-10 10:48:23.445822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-06-10 10:48:23.445831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-06-10 10:48:23.445840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-06-10 10:48:23.445847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-06-10 10:48:23.445856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-06-10 10:48:23.445863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-06-10 10:48:23.445872] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-06-10 10:48:23.445879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-06-10 10:48:23.445888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-06-10 10:48:23.445895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-06-10 10:48:23.445904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-06-10 10:48:23.445911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-06-10 10:48:23.445920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-06-10 10:48:23.445927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-06-10 10:48:23.445939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-06-10 10:48:23.445946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-06-10 10:48:23.445955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-06-10 10:48:23.445962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-06-10 10:48:23.445971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-06-10 10:48:23.445978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-06-10 10:48:23.445987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-06-10 10:48:23.445994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-06-10 10:48:23.446003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-06-10 10:48:23.446010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-06-10 10:48:23.446019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-06-10 10:48:23.446026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-06-10 10:48:23.446036] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-06-10 10:48:23.446043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-06-10 10:48:23.446052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-06-10 10:48:23.446059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-06-10 10:48:23.446068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-06-10 10:48:23.446075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-06-10 10:48:23.446084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-06-10 10:48:23.446091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-06-10 10:48:23.446100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-06-10 10:48:23.446107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-06-10 10:48:23.446116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-06-10 10:48:23.446123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-06-10 10:48:23.446132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-06-10 10:48:23.446141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-06-10 10:48:23.446150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-06-10 10:48:23.446157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-06-10 10:48:23.446166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-06-10 10:48:23.446174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-06-10 10:48:23.446183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-06-10 10:48:23.446190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-06-10 10:48:23.446199] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-06-10 10:48:23.446206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-06-10 10:48:23.446215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-06-10 10:48:23.446222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-06-10 10:48:23.446231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-06-10 10:48:23.446238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-06-10 10:48:23.446252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-06-10 10:48:23.446259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-06-10 10:48:23.446268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-06-10 10:48:23.446275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-06-10 10:48:23.446284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-06-10 10:48:23.446291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-06-10 10:48:23.446300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-06-10 10:48:23.446307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-06-10 10:48:23.446316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-06-10 10:48:23.446323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-06-10 10:48:23.446332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-06-10 10:48:23.446339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-06-10 10:48:23.446349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-06-10 10:48:23.446357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-06-10 10:48:23.446366] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-06-10 10:48:23.446373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-06-10 10:48:23.446382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-06-10 10:48:23.446389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-06-10 10:48:23.446399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-06-10 10:48:23.451236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-06-10 10:48:23.451293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-06-10 10:48:23.451302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-06-10 10:48:23.451311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-06-10 10:48:23.451319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-06-10 10:48:23.451329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-06-10 10:48:23.451336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-06-10 10:48:23.451345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-06-10 10:48:23.451352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-06-10 10:48:23.451361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-06-10 10:48:23.451368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-06-10 10:48:23.451377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-06-10 10:48:23.451384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-06-10 10:48:23.451393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-06-10 10:48:23.451400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.340 [2024-06-10 10:48:23.451409] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.340 [2024-06-10 10:48:23.451416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.340 [2024-06-10 10:48:23.451424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.340 [2024-06-10 10:48:23.451436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.340 [2024-06-10 10:48:23.451445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.340 [2024-06-10 10:48:23.451452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.340 [2024-06-10 10:48:23.451460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.340 [2024-06-10 10:48:23.451467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.340 [2024-06-10 10:48:23.451476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.340 [2024-06-10 10:48:23.451483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.340 [2024-06-10 10:48:23.451492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.340 [2024-06-10 10:48:23.451498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.340 [2024-06-10 10:48:23.451507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.340 [2024-06-10 10:48:23.451514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.340 [2024-06-10 10:48:23.451523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.340 [2024-06-10 10:48:23.451530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.340 [2024-06-10 10:48:23.451539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.340 [2024-06-10 10:48:23.451545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.340 [2024-06-10 10:48:23.451554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.340 [2024-06-10 10:48:23.451561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.340 [2024-06-10 10:48:23.451570] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.340 [2024-06-10 10:48:23.451576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.340 [2024-06-10 10:48:23.451585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.340 [2024-06-10 10:48:23.451592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.340 [2024-06-10 10:48:23.451601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.340 [2024-06-10 10:48:23.451607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.340 [2024-06-10 10:48:23.451616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.340 [2024-06-10 10:48:23.451624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.340 [2024-06-10 10:48:23.451634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.340 [2024-06-10 10:48:23.451641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.340 [2024-06-10 10:48:23.451650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.340 [2024-06-10 10:48:23.451657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.340 [2024-06-10 10:48:23.451666] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3f220 is same with the state(5) to be set 00:22:59.340 [2024-06-10 10:48:23.453039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.340 [2024-06-10 10:48:23.453055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.340 [2024-06-10 10:48:23.453068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.340 [2024-06-10 10:48:23.453076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.340 [2024-06-10 10:48:23.453085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.340 [2024-06-10 10:48:23.453093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.340 [2024-06-10 10:48:23.453102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.340 [2024-06-10 10:48:23.453109] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.340 [2024-06-10 10:48:23.453118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.340 [2024-06-10 10:48:23.453125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.340 [2024-06-10 10:48:23.453135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.340 [2024-06-10 10:48:23.453142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.340 [2024-06-10 10:48:23.453151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.340 [2024-06-10 10:48:23.453158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.340 [2024-06-10 10:48:23.453167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.340 [2024-06-10 10:48:23.453174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.340 [2024-06-10 10:48:23.453184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.340 [2024-06-10 10:48:23.453191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.340 [2024-06-10 10:48:23.453200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.340 [2024-06-10 10:48:23.453207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.340 [2024-06-10 10:48:23.453216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.340 [2024-06-10 10:48:23.453227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.340 [2024-06-10 10:48:23.453236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.340 [2024-06-10 10:48:23.453247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.340 [2024-06-10 10:48:23.453257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.340 [2024-06-10 10:48:23.453263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.340 [2024-06-10 10:48:23.453272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.340 [2024-06-10 10:48:23.453280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.340 [2024-06-10 10:48:23.453290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.340 [2024-06-10 10:48:23.453297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.340 [2024-06-10 10:48:23.453306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.340 [2024-06-10 10:48:23.453312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.340 [2024-06-10 10:48:23.453322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.340 [2024-06-10 10:48:23.453329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.340 [2024-06-10 10:48:23.453339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.340 [2024-06-10 10:48:23.453346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.340 [2024-06-10 10:48:23.453355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.340 [2024-06-10 10:48:23.453362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.340 [2024-06-10 10:48:23.453371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.340 [2024-06-10 10:48:23.453378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.341 [2024-06-10 10:48:23.453387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.341 [2024-06-10 10:48:23.453395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.341 [2024-06-10 10:48:23.453403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.341 [2024-06-10 10:48:23.453410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.341 [2024-06-10 10:48:23.453419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.341 [2024-06-10 10:48:23.453427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.341 [2024-06-10 10:48:23.453440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.341 [2024-06-10 10:48:23.453447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.341 [2024-06-10 10:48:23.453456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.341 [2024-06-10 10:48:23.453463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.341 [2024-06-10 10:48:23.453472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.341 [2024-06-10 10:48:23.453479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.341 [2024-06-10 10:48:23.453488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.341 [2024-06-10 10:48:23.453495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.341 [2024-06-10 10:48:23.453504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.341 [2024-06-10 10:48:23.453511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.341 [2024-06-10 10:48:23.453521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.341 [2024-06-10 10:48:23.453528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.341 [2024-06-10 10:48:23.453537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.341 [2024-06-10 10:48:23.453544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.341 [2024-06-10 10:48:23.453554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.341 [2024-06-10 10:48:23.453561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.341 [2024-06-10 10:48:23.453571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.341 [2024-06-10 10:48:23.453578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.341 [2024-06-10 10:48:23.453587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.341 [2024-06-10 10:48:23.453594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.341 [2024-06-10 10:48:23.453603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.341 [2024-06-10 10:48:23.453610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:59.341 [2024-06-10 10:48:23.453619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.341 [2024-06-10 10:48:23.453626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.341 [2024-06-10 10:48:23.453635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.341 [2024-06-10 10:48:23.453644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.341 [2024-06-10 10:48:23.453653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.341 [2024-06-10 10:48:23.453659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.341 [2024-06-10 10:48:23.453668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.341 [2024-06-10 10:48:23.453676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.341 [2024-06-10 10:48:23.453685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.341 [2024-06-10 10:48:23.453692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.341 [2024-06-10 10:48:23.453701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.341 [2024-06-10 10:48:23.453708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.341 [2024-06-10 10:48:23.453717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.341 [2024-06-10 10:48:23.453724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.341 [2024-06-10 10:48:23.453734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.341 [2024-06-10 10:48:23.453741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.341 [2024-06-10 10:48:23.453750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.341 [2024-06-10 10:48:23.453757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.341 [2024-06-10 10:48:23.453766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.341 [2024-06-10 10:48:23.453773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:59.341 [2024-06-10 10:48:23.453782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.341 [2024-06-10 10:48:23.453789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.341 [2024-06-10 10:48:23.453798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.341 [2024-06-10 10:48:23.453805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.341 [2024-06-10 10:48:23.453814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.341 [2024-06-10 10:48:23.453821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.341 [2024-06-10 10:48:23.453830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.341 [2024-06-10 10:48:23.453838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.341 [2024-06-10 10:48:23.453848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.341 [2024-06-10 10:48:23.453855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.341 [2024-06-10 10:48:23.453864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.341 [2024-06-10 10:48:23.453871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.341 [2024-06-10 10:48:23.453880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.341 [2024-06-10 10:48:23.453887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-06-10 10:48:23.453896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-06-10 10:48:23.453903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-06-10 10:48:23.453912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-06-10 10:48:23.453920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-06-10 10:48:23.453929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-06-10 10:48:23.453936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-06-10 
10:48:23.453945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-06-10 10:48:23.453952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-06-10 10:48:23.453962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-06-10 10:48:23.453969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-06-10 10:48:23.453979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-06-10 10:48:23.453986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-06-10 10:48:23.453994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-06-10 10:48:23.454001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-06-10 10:48:23.454010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-06-10 10:48:23.454018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-06-10 10:48:23.454026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-06-10 10:48:23.454034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-06-10 10:48:23.454042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-06-10 10:48:23.454050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-06-10 10:48:23.454059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-06-10 10:48:23.454067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-06-10 10:48:23.454076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-06-10 10:48:23.454084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-06-10 10:48:23.454092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-06-10 10:48:23.454099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-06-10 10:48:23.454107] 
nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ccc0 is same with the state(5) to be set 00:22:59.342 [2024-06-10 10:48:23.455397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-06-10 10:48:23.455411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-06-10 10:48:23.455425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-06-10 10:48:23.455433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-06-10 10:48:23.455445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-06-10 10:48:23.455453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-06-10 10:48:23.455464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-06-10 10:48:23.455473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-06-10 10:48:23.455484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-06-10 10:48:23.455492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-06-10 10:48:23.455503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-06-10 10:48:23.455512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-06-10 10:48:23.455521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-06-10 10:48:23.455528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-06-10 10:48:23.455537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-06-10 10:48:23.455544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-06-10 10:48:23.455553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-06-10 10:48:23.455563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-06-10 10:48:23.455572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-06-10 10:48:23.455580] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-06-10 10:48:23.455590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-06-10 10:48:23.455598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-06-10 10:48:23.455607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-06-10 10:48:23.455614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-06-10 10:48:23.455623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-06-10 10:48:23.455630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-06-10 10:48:23.455639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-06-10 10:48:23.455647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-06-10 10:48:23.455656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-06-10 10:48:23.455663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-06-10 10:48:23.455673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-06-10 10:48:23.455680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-06-10 10:48:23.455689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-06-10 10:48:23.455697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-06-10 10:48:23.455706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-06-10 10:48:23.455713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-06-10 10:48:23.455722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-06-10 10:48:23.455729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-06-10 10:48:23.455738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.342 [2024-06-10 10:48:23.455745] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.342 [2024-06-10 10:48:23.455755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-06-10 10:48:23.455762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-06-10 10:48:23.455772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-06-10 10:48:23.455779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-06-10 10:48:23.455789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-06-10 10:48:23.455796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-06-10 10:48:23.455806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-06-10 10:48:23.455813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-06-10 10:48:23.455822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-06-10 10:48:23.455828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-06-10 10:48:23.455838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-06-10 10:48:23.455845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-06-10 10:48:23.455855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-06-10 10:48:23.455862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-06-10 10:48:23.455870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-06-10 10:48:23.455877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-06-10 10:48:23.455886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-06-10 10:48:23.455893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-06-10 10:48:23.455903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-06-10 10:48:23.455910] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-06-10 10:48:23.455918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-06-10 10:48:23.455925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-06-10 10:48:23.455935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-06-10 10:48:23.455942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-06-10 10:48:23.455951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-06-10 10:48:23.455958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-06-10 10:48:23.455967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-06-10 10:48:23.455975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-06-10 10:48:23.455985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-06-10 10:48:23.455992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-06-10 10:48:23.456002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-06-10 10:48:23.456008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-06-10 10:48:23.456017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-06-10 10:48:23.456024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-06-10 10:48:23.456034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-06-10 10:48:23.456041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-06-10 10:48:23.456050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-06-10 10:48:23.456057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-06-10 10:48:23.456067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-06-10 10:48:23.456075] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-06-10 10:48:23.456085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-06-10 10:48:23.456092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-06-10 10:48:23.456101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-06-10 10:48:23.456108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-06-10 10:48:23.456118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-06-10 10:48:23.456125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-06-10 10:48:23.456135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-06-10 10:48:23.456142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-06-10 10:48:23.456151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-06-10 10:48:23.456158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.343 [2024-06-10 10:48:23.456168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.343 [2024-06-10 10:48:23.456176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.344 [2024-06-10 10:48:23.456186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.344 [2024-06-10 10:48:23.456193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.344 [2024-06-10 10:48:23.456202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.344 [2024-06-10 10:48:23.456209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.344 [2024-06-10 10:48:23.456219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.344 [2024-06-10 10:48:23.456226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.344 [2024-06-10 10:48:23.456236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.344 [2024-06-10 10:48:23.456249] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.344 [2024-06-10 10:48:23.456259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.344 [2024-06-10 10:48:23.456266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.344 [2024-06-10 10:48:23.456275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.344 [2024-06-10 10:48:23.456283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.344 [2024-06-10 10:48:23.456291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.344 [2024-06-10 10:48:23.456299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.344 [2024-06-10 10:48:23.456308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.344 [2024-06-10 10:48:23.456315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.344 [2024-06-10 10:48:23.456324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.344 [2024-06-10 10:48:23.456331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.344 [2024-06-10 10:48:23.456341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.344 [2024-06-10 10:48:23.456348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.344 [2024-06-10 10:48:23.456357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.344 [2024-06-10 10:48:23.456364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.344 [2024-06-10 10:48:23.456378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.344 [2024-06-10 10:48:23.456386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.344 [2024-06-10 10:48:23.456395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.344 [2024-06-10 10:48:23.456404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.344 [2024-06-10 10:48:23.456414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.344 [2024-06-10 10:48:23.456421] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.344 [2024-06-10 10:48:23.456431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.344 [2024-06-10 10:48:23.456439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.344 [2024-06-10 10:48:23.456448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.344 [2024-06-10 10:48:23.456455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.344 [2024-06-10 10:48:23.456464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.344 [2024-06-10 10:48:23.456471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.344 [2024-06-10 10:48:23.456481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.344 [2024-06-10 10:48:23.456488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.344 [2024-06-10 10:48:23.456496] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb36780 is same with the state(5) to be set 00:22:59.344 [2024-06-10 10:48:23.459043] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:59.344 [2024-06-10 10:48:23.459078] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:59.344 [2024-06-10 10:48:23.459089] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:59.344 [2024-06-10 10:48:23.459098] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:22:59.344 [2024-06-10 10:48:23.459147] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a1f60 (9): Bad file descriptor 00:22:59.344 [2024-06-10 10:48:23.459202] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:59.344 [2024-06-10 10:48:23.459225] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
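The flush failure above carries a raw errno in parentheses: "(9)" is EBADF, which typically means the qpair's socket descriptor had already been closed by the time the flush was attempted. Purely as an illustration (this is not SPDK code), the errno values that appear in this log can be decoded with Python's standard errno module on Linux:

    import errno, os

    for num in (9, 111):                               # the two errno values seen in this log
        print(num, errno.errorcode[num], os.strerror(num))
    # 9   EBADF        Bad file descriptor  -> descriptor already closed when the flush ran
    # 111 ECONNREFUSED Connection refused   -> see the connect() failures that follow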
00:22:59.344 [2024-06-10 10:48:23.459316] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:59.344 [2024-06-10 10:48:23.459733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.344 [2024-06-10 10:48:23.459749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb6c4c0 with addr=10.0.0.2, port=4420 00:22:59.344 [2024-06-10 10:48:23.459759] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6c4c0 is same with the state(5) to be set 00:22:59.344 [2024-06-10 10:48:23.460126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.344 [2024-06-10 10:48:23.460137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cd300 with addr=10.0.0.2, port=4420 00:22:59.344 [2024-06-10 10:48:23.460144] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9cd300 is same with the state(5) to be set 00:22:59.344 [2024-06-10 10:48:23.460535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.344 [2024-06-10 10:48:23.460545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c3b20 with addr=10.0.0.2, port=4420 00:22:59.344 [2024-06-10 10:48:23.460557] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c3b20 is same with the state(5) to be set 00:22:59.344 [2024-06-10 10:48:23.460945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.344 [2024-06-10 10:48:23.460955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0abf0 with addr=10.0.0.2, port=4420 00:22:59.344 [2024-06-10 10:48:23.460963] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0abf0 is same with the state(5) to be set 00:22:59.344 [2024-06-10 10:48:23.460971] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:59.344 [2024-06-10 10:48:23.460977] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:59.344 [2024-06-10 10:48:23.460987] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
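The posix_sock_create "connect() failed, errno = 111" entries above are plain ECONNREFUSED: at that point nothing is accepting connections on 10.0.0.2:4420, so each redial is rejected immediately. A minimal sketch of the same failure mode (address and port copied from the log; pointing it at any host/port with no listener should produce errno 111 on Linux):

    import socket

    def try_connect(addr: str, port: int) -> None:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(2.0)
            try:
                s.connect((addr, port))
                print("connected to", (addr, port))
            except ConnectionRefusedError as e:        # errno 111 on Linux
                print("connect() failed, errno =", e.errno)
            except OSError as e:                       # unreachable host, timeout, ...
                print("connect() failed:", e)

    try_connect("10.0.0.2", 4420)                      # 4420 is the NVMe/TCP port used by this test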
00:22:59.344 [2024-06-10 10:48:23.462102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.344 [2024-06-10 10:48:23.462117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.344 [2024-06-10 10:48:23.462133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.344 [2024-06-10 10:48:23.462141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.344 [2024-06-10 10:48:23.462151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.344 [2024-06-10 10:48:23.462159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.344 [2024-06-10 10:48:23.462170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.344 [2024-06-10 10:48:23.462178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.344 [2024-06-10 10:48:23.462187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.344 [2024-06-10 10:48:23.462195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.344 [2024-06-10 10:48:23.462204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.344 [2024-06-10 10:48:23.462211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.344 [2024-06-10 10:48:23.462220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.345 [2024-06-10 10:48:23.462228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.345 [2024-06-10 10:48:23.462238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.345 [2024-06-10 10:48:23.462251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.345 [2024-06-10 10:48:23.462260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.345 [2024-06-10 10:48:23.462267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.345 [2024-06-10 10:48:23.462277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.345 [2024-06-10 10:48:23.462288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.345 [2024-06-10 
10:48:23.462298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.345 [2024-06-10 10:48:23.462305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.345 [2024-06-10 10:48:23.462314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.345 [2024-06-10 10:48:23.462322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.345 [2024-06-10 10:48:23.462332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.345 [2024-06-10 10:48:23.462340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.345 [2024-06-10 10:48:23.462350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.345 [2024-06-10 10:48:23.462358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.345 [2024-06-10 10:48:23.462369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.345 [2024-06-10 10:48:23.462377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.345 [2024-06-10 10:48:23.462387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.345 [2024-06-10 10:48:23.462395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.345 [2024-06-10 10:48:23.462404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.345 [2024-06-10 10:48:23.462411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.345 [2024-06-10 10:48:23.462421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.345 [2024-06-10 10:48:23.462429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.345 [2024-06-10 10:48:23.462439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.345 [2024-06-10 10:48:23.462446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.345 [2024-06-10 10:48:23.462455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.345 [2024-06-10 10:48:23.462462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.345 [2024-06-10 10:48:23.462472] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.345 [2024-06-10 10:48:23.462479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.345 [2024-06-10 10:48:23.462488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.345 [2024-06-10 10:48:23.462495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.345 [2024-06-10 10:48:23.462506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.345 [2024-06-10 10:48:23.462513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.345 [2024-06-10 10:48:23.462523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.345 [2024-06-10 10:48:23.462531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.345 [2024-06-10 10:48:23.462540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.345 [2024-06-10 10:48:23.462547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.345 [2024-06-10 10:48:23.462556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.345 [2024-06-10 10:48:23.462564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.345 [2024-06-10 10:48:23.462574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.345 [2024-06-10 10:48:23.462581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.345 [2024-06-10 10:48:23.462590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.345 [2024-06-10 10:48:23.462597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.345 [2024-06-10 10:48:23.462606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.345 [2024-06-10 10:48:23.462613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.345 [2024-06-10 10:48:23.462622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.345 [2024-06-10 10:48:23.462631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.345 [2024-06-10 10:48:23.462641] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.345 [2024-06-10 10:48:23.462648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.345 [2024-06-10 10:48:23.462657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.345 [2024-06-10 10:48:23.462665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.345 [2024-06-10 10:48:23.462674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.345 [2024-06-10 10:48:23.462681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.345 [2024-06-10 10:48:23.462691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.345 [2024-06-10 10:48:23.462698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.345 [2024-06-10 10:48:23.462707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.345 [2024-06-10 10:48:23.462716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.345 [2024-06-10 10:48:23.462726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.345 [2024-06-10 10:48:23.462733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.345 [2024-06-10 10:48:23.462743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.345 [2024-06-10 10:48:23.462749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.345 [2024-06-10 10:48:23.462759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.345 [2024-06-10 10:48:23.462767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.345 [2024-06-10 10:48:23.462776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.345 [2024-06-10 10:48:23.462783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.345 [2024-06-10 10:48:23.462792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.345 [2024-06-10 10:48:23.462799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.345 [2024-06-10 10:48:23.462810] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.345 [2024-06-10 10:48:23.462817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.345 [2024-06-10 10:48:23.462826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.345 [2024-06-10 10:48:23.462833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.345 [2024-06-10 10:48:23.462843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.345 [2024-06-10 10:48:23.462850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.346 [2024-06-10 10:48:23.462860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.346 [2024-06-10 10:48:23.462868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.346 [2024-06-10 10:48:23.462877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.346 [2024-06-10 10:48:23.462884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.346 [2024-06-10 10:48:23.462893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.346 [2024-06-10 10:48:23.462901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.346 [2024-06-10 10:48:23.462911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.346 [2024-06-10 10:48:23.462918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.346 [2024-06-10 10:48:23.462929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.346 [2024-06-10 10:48:23.462936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.346 [2024-06-10 10:48:23.462945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.346 [2024-06-10 10:48:23.462953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.346 [2024-06-10 10:48:23.462962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.346 [2024-06-10 10:48:23.462969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.346 [2024-06-10 10:48:23.462978] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.346 [2024-06-10 10:48:23.462985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.346 [2024-06-10 10:48:23.462995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.346 [2024-06-10 10:48:23.463002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.346 [2024-06-10 10:48:23.463012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.346 [2024-06-10 10:48:23.463019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.346 [2024-06-10 10:48:23.463028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.346 [2024-06-10 10:48:23.463035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.346 [2024-06-10 10:48:23.463044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.346 [2024-06-10 10:48:23.463051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.346 [2024-06-10 10:48:23.463061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.346 [2024-06-10 10:48:23.463068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.346 [2024-06-10 10:48:23.463077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.346 [2024-06-10 10:48:23.463089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.346 [2024-06-10 10:48:23.463099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.346 [2024-06-10 10:48:23.463106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.346 [2024-06-10 10:48:23.463115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.346 [2024-06-10 10:48:23.463123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.346 [2024-06-10 10:48:23.463131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.346 [2024-06-10 10:48:23.463140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.346 [2024-06-10 10:48:23.470617] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.346 [2024-06-10 10:48:23.470653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.346 [2024-06-10 10:48:23.470665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.346 [2024-06-10 10:48:23.470673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.346 [2024-06-10 10:48:23.470682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.346 [2024-06-10 10:48:23.470690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.346 [2024-06-10 10:48:23.470700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.346 [2024-06-10 10:48:23.470707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.346 [2024-06-10 10:48:23.470716] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99df90 is same with the state(5) to be set 00:22:59.346 [2024-06-10 10:48:23.472634] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:22:59.346 [2024-06-10 10:48:23.472660] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:22:59.346 [2024-06-10 10:48:23.472671] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:59.346 [2024-06-10 10:48:23.472681] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
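Every SPDK entry in this dump has the same shape: a bracketed timestamp, the source file, line and function that emitted it, a severity tag (*NOTICE*, *WARNING* or *ERROR*), and the message text. A rough parsing sketch for pulling those fields out of a captured console log (an illustration only, not an SPDK-provided tool):

    import re

    SPDK_LOG = re.compile(
        r"\[(?P<ts>[\d-]+ [\d:.]+)\]\s+"        # [2024-06-10 10:48:23.472681]
        r"(?P<file>\S+\.c):\s*(?P<line>\d+):"   # bdev_nvme.c:2062:
        r"(?P<func>\w+):\s+"                    # _bdev_nvme_reset_ctrlr_complete:
        r"\*(?P<level>\w+)\*:\s+"               # *ERROR*:
        r"(?P<msg>.*)"                          # Resetting controller failed.
    )

    sample = ("[2024-06-10 10:48:23.472681] bdev_nvme.c:2062:"
              "_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.")
    m = SPDK_LOG.search(sample)
    if m:
        print(m.group("level"), m.group("file"), m.group("msg"))
    # -> ERROR bdev_nvme.c Resetting controller failed.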
00:22:59.346 task offset: 24320 on job bdev=Nvme6n1 fails
00:22:59.346
00:22:59.346 Latency(us)
00:22:59.346 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:59.346 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:59.346 Job: Nvme1n1 ended in about 0.96 seconds with error
00:22:59.346 Verification LBA range: start 0x0 length 0x400
00:22:59.346 Nvme1n1 : 0.96 200.92 12.56 66.97 0.00 236235.31 19551.57 244667.73
00:22:59.346 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:59.346 Job: Nvme2n1 ended in about 0.96 seconds with error
00:22:59.346 Verification LBA range: start 0x0 length 0x400
00:22:59.346 Nvme2n1 : 0.96 133.50 8.34 66.75 0.00 309790.72 22719.15 270882.13
00:22:59.346 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:59.346 Job: Nvme3n1 ended in about 0.96 seconds with error
00:22:59.346 Verification LBA range: start 0x0 length 0x400
00:22:59.346 Nvme3n1 : 0.96 199.78 12.49 66.59 0.00 228095.79 21736.11 222822.40
00:22:59.346 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:59.346 Job: Nvme4n1 ended in about 0.97 seconds with error
00:22:59.346 Verification LBA range: start 0x0 length 0x400
00:22:59.346 Nvme4n1 : 0.97 198.29 12.39 66.10 0.00 225126.83 15400.96 246415.36
00:22:59.346 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:59.346 Job: Nvme5n1 ended in about 0.95 seconds with error
00:22:59.346 Verification LBA range: start 0x0 length 0x400
00:22:59.346 Nvme5n1 : 0.95 199.92 12.49 67.34 0.00 217732.50 7099.73 269134.51
00:22:59.346 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:59.346 Job: Nvme6n1 ended in about 0.95 seconds with error
00:22:59.346 Verification LBA range: start 0x0 length 0x400
00:22:59.346 Nvme6n1 : 0.95 200.47 12.53 67.53 0.00 212264.61 16820.91 242920.11
00:22:59.346 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:59.346 Job: Nvme7n1 ended in about 0.95 seconds with error
00:22:59.346 Verification LBA range: start 0x0 length 0x400
00:22:59.346 Nvme7n1 : 0.95 202.33 12.65 67.44 0.00 206162.35 23592.96 225443.84
00:22:59.346 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:59.346 Job: Nvme8n1 ended in about 0.97 seconds with error
00:22:59.346 Verification LBA range: start 0x0 length 0x400
00:22:59.346 Nvme8n1 : 0.97 197.79 12.36 65.93 0.00 206867.63 41069.23 219327.15
00:22:59.346 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:59.346 Job: Nvme9n1 ended in about 0.99 seconds with error
00:22:59.346 Verification LBA range: start 0x0 length 0x400
00:22:59.346 Nvme9n1 : 0.99 129.64 8.10 64.82 0.00 275154.20 21299.20 270882.13
00:22:59.347 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:59.347 Job: Nvme10n1 ended in about 0.97 seconds with error
00:22:59.347 Verification LBA range: start 0x0 length 0x400
00:22:59.347 Nvme10n1 : 0.97 131.54 8.22 65.77 0.00 264227.84 22500.69 269134.51
00:22:59.347 ===================================================================================================================
00:22:59.347 Total : 1794.17 112.14 665.24 0.00 234558.94 7099.73 270882.13
00:22:59.347 [2024-06-10 10:48:23.498587] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:59.347 [2024-06-10 10:48:23.498632] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting
controller 00:22:59.347 [2024-06-10 10:48:23.499110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.347 [2024-06-10 10:48:23.499129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa70c20 with addr=10.0.0.2, port=4420 00:22:59.347 [2024-06-10 10:48:23.499139] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa70c20 is same with the state(5) to be set 00:22:59.347 [2024-06-10 10:48:23.499154] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb6c4c0 (9): Bad file descriptor 00:22:59.347 [2024-06-10 10:48:23.499166] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd300 (9): Bad file descriptor 00:22:59.347 [2024-06-10 10:48:23.499176] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c3b20 (9): Bad file descriptor 00:22:59.347 [2024-06-10 10:48:23.499186] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0abf0 (9): Bad file descriptor 00:22:59.347 [2024-06-10 10:48:23.499227] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:59.347 [2024-06-10 10:48:23.499247] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:59.347 [2024-06-10 10:48:23.499257] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:59.347 [2024-06-10 10:48:23.499268] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:59.347 [2024-06-10 10:48:23.499279] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa70c20 (9): Bad file descriptor 00:22:59.347 [2024-06-10 10:48:23.499622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.347 [2024-06-10 10:48:23.499637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4a5610 with addr=10.0.0.2, port=4420 00:22:59.347 [2024-06-10 10:48:23.499646] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4a5610 is same with the state(5) to be set 00:22:59.347 [2024-06-10 10:48:23.500029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.347 [2024-06-10 10:48:23.500040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf15e0 with addr=10.0.0.2, port=4420 00:22:59.347 [2024-06-10 10:48:23.500047] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf15e0 is same with the state(5) to be set 00:22:59.347 [2024-06-10 10:48:23.500315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.347 [2024-06-10 10:48:23.500328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c4f10 with addr=10.0.0.2, port=4420 00:22:59.347 [2024-06-10 10:48:23.500335] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c4f10 is same with the state(5) to be set 00:22:59.347 [2024-06-10 10:48:23.501054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.347 [2024-06-10 10:48:23.501065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb09f70 with addr=10.0.0.2, port=4420 00:22:59.347 [2024-06-10 10:48:23.501073] nvme_tcp.c: 
323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb09f70 is same with the state(5) to be set 00:22:59.347 [2024-06-10 10:48:23.501083] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:59.347 [2024-06-10 10:48:23.501091] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:59.347 [2024-06-10 10:48:23.501101] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:59.347 [2024-06-10 10:48:23.501114] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:59.347 [2024-06-10 10:48:23.501121] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:59.347 [2024-06-10 10:48:23.501129] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:59.347 [2024-06-10 10:48:23.501140] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:59.347 [2024-06-10 10:48:23.501147] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:59.347 [2024-06-10 10:48:23.501155] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:59.347 [2024-06-10 10:48:23.501166] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:59.347 [2024-06-10 10:48:23.501174] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:59.347 [2024-06-10 10:48:23.501181] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:59.347 [2024-06-10 10:48:23.501201] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:59.347 [2024-06-10 10:48:23.501213] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:59.347 [2024-06-10 10:48:23.501224] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:59.347 [2024-06-10 10:48:23.501235] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:59.347 [2024-06-10 10:48:23.501250] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:59.347 [2024-06-10 10:48:23.501261] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:59.347 [2024-06-10 10:48:23.501585] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:59.347 [2024-06-10 10:48:23.501613] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:59.347 [2024-06-10 10:48:23.501622] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:59.347 [2024-06-10 10:48:23.501628] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:59.347 [2024-06-10 10:48:23.501634] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
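The errno = 111 entries above are ECONNREFUSED: by this point the target application is stopping (note the spdk_app_stop warning earlier), so nothing is listening on 10.0.0.2:4420 any more, every reconnect attempt is refused, and each controller ends up "in failed state" instead of completing its reset. The same condition can be observed with a plain TCP probe of the listener address; this is only an illustrative check, not part of the test scripts:

  # probe the NVMe/TCP listener; once the target process is gone this fails with "Connection refused"
  timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' && echo 'listener still up' || echo 'connect failed (errno 111 / ECONNREFUSED expected)'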
00:22:59.347 [2024-06-10 10:48:23.501649] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4a5610 (9): Bad file descriptor 00:22:59.347 [2024-06-10 10:48:23.501662] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf15e0 (9): Bad file descriptor 00:22:59.347 [2024-06-10 10:48:23.501672] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c4f10 (9): Bad file descriptor 00:22:59.347 [2024-06-10 10:48:23.501681] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb09f70 (9): Bad file descriptor 00:22:59.347 [2024-06-10 10:48:23.501689] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:59.347 [2024-06-10 10:48:23.501696] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:59.347 [2024-06-10 10:48:23.501703] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:59.347 [2024-06-10 10:48:23.501973] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:59.347 [2024-06-10 10:48:23.502381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.347 [2024-06-10 10:48:23.502394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a1f60 with addr=10.0.0.2, port=4420 00:22:59.347 [2024-06-10 10:48:23.502401] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a1f60 is same with the state(5) to be set 00:22:59.347 [2024-06-10 10:48:23.502409] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:59.347 [2024-06-10 10:48:23.502415] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:59.347 [2024-06-10 10:48:23.502423] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:59.347 [2024-06-10 10:48:23.502434] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:59.347 [2024-06-10 10:48:23.502440] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:59.347 [2024-06-10 10:48:23.502447] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:59.347 [2024-06-10 10:48:23.502456] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:59.347 [2024-06-10 10:48:23.502462] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:59.347 [2024-06-10 10:48:23.502469] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:59.347 [2024-06-10 10:48:23.502478] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:59.347 [2024-06-10 10:48:23.502485] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:59.347 [2024-06-10 10:48:23.502491] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:22:59.347 [2024-06-10 10:48:23.502526] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:59.347 [2024-06-10 10:48:23.502534] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:59.347 [2024-06-10 10:48:23.502540] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:59.347 [2024-06-10 10:48:23.502546] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:59.347 [2024-06-10 10:48:23.502554] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a1f60 (9): Bad file descriptor 00:22:59.347 [2024-06-10 10:48:23.502583] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:59.347 [2024-06-10 10:48:23.502590] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:59.347 [2024-06-10 10:48:23.502598] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:59.348 [2024-06-10 10:48:23.502626] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:59.609 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:22:59.609 10:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:23:00.552 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 910587 00:23:00.552 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (910587) - No such process 00:23:00.552 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:23:00.552 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:23:00.552 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:00.552 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:00.552 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:00.552 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:00.552 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:00.552 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:23:00.552 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:00.552 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:23:00.552 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:00.552 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:00.552 rmmod nvme_tcp 00:23:00.552 rmmod nvme_fabrics 00:23:00.552 rmmod nvme_keyring 00:23:00.552 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:00.552 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:23:00.552 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:23:00.552 
10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:00.552 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:00.552 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:00.552 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:00.552 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:00.552 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:00.552 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.552 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:00.552 10:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:03.097 10:48:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:03.097 00:23:03.097 real 0m7.754s 00:23:03.097 user 0m18.808s 00:23:03.097 sys 0m1.218s 00:23:03.097 10:48:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:03.097 10:48:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:03.097 ************************************ 00:23:03.097 END TEST nvmf_shutdown_tc3 00:23:03.097 ************************************ 00:23:03.097 10:48:26 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:23:03.097 00:23:03.097 real 0m33.002s 00:23:03.097 user 1m18.067s 00:23:03.097 sys 0m9.485s 00:23:03.097 10:48:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:03.097 10:48:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:03.097 ************************************ 00:23:03.097 END TEST nvmf_shutdown 00:23:03.097 ************************************ 00:23:03.097 10:48:26 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:23:03.097 10:48:26 nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:03.097 10:48:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:03.097 10:48:26 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:23:03.097 10:48:26 nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:03.097 10:48:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:03.097 10:48:26 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:23:03.097 10:48:26 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:03.097 10:48:26 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:03.097 10:48:26 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:03.097 10:48:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:03.097 ************************************ 00:23:03.097 START TEST nvmf_multicontroller 00:23:03.097 ************************************ 00:23:03.097 10:48:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:03.097 * Looking for test storage... 
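The suite that starts here is driven by the single script named in the run_test line above. Run outside the CI wrapper it would look roughly like this sketch (same checkout path as in the log; the script needs root and a host already wired up with the NICs and 10.0.0.x addresses this job uses):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo ./test/nvmf/host/multicontroller.sh --transport=tcp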
00:23:03.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:03.097 10:48:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:03.097 10:48:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:03.097 10:48:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:03.097 10:48:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:03.097 10:48:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:03.097 10:48:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:03.097 10:48:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:03.097 10:48:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:03.097 10:48:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:03.097 10:48:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:03.097 10:48:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:03.097 10:48:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:03.097 10:48:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:03.097 10:48:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:03.097 10:48:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:03.097 10:48:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:03.097 10:48:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:03.097 10:48:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:03.097 10:48:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:03.097 10:48:27 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:03.097 10:48:27 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:03.098 10:48:27 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:03.098 10:48:27 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.098 10:48:27 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.098 10:48:27 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.098 10:48:27 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:03.098 10:48:27 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.098 10:48:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:23:03.098 10:48:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:03.098 10:48:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:03.098 10:48:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:03.098 10:48:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:03.098 10:48:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:03.098 10:48:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:03.098 10:48:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:03.098 10:48:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:03.098 10:48:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:03.098 10:48:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:03.098 10:48:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:03.098 10:48:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:03.098 10:48:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:03.098 10:48:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:03.098 10:48:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:03.098 10:48:27 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:03.098 10:48:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:03.098 10:48:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:03.098 10:48:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:03.098 10:48:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:03.098 10:48:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:03.098 10:48:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:03.098 10:48:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:03.098 10:48:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:03.098 10:48:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:03.098 10:48:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:23:03.098 10:48:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:09.837 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:09.837 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:23:09.837 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:09.837 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:09.837 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:09.837 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:09.837 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:09.838 10:48:34 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:09.838 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:09.838 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:09.838 Found net devices under 0000:31:00.0: cvl_0_0 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:09.838 Found net devices under 0000:31:00.1: cvl_0_1 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:09.838 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:10.099 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:10.099 10:48:34 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:10.099 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:10.099 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:10.099 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:10.099 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:10.099 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:10.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:10.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.755 ms 00:23:10.099 00:23:10.099 --- 10.0.0.2 ping statistics --- 00:23:10.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.099 rtt min/avg/max/mdev = 0.755/0.755/0.755/0.000 ms 00:23:10.099 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:10.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:10.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:23:10.099 00:23:10.099 --- 10.0.0.1 ping statistics --- 00:23:10.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.099 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:23:10.099 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:10.099 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:23:10.099 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:10.099 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:10.099 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:10.099 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:10.099 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:10.099 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:10.099 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:10.099 10:48:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:10.099 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:10.099 10:48:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:10.099 10:48:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:10.099 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=915707 00:23:10.099 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 915707 00:23:10.099 10:48:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # '[' -z 915707 ']' 00:23:10.099 10:48:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.099 10:48:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:10.099 10:48:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:23:10.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:10.099 10:48:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:10.099 10:48:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:10.099 10:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:10.360 [2024-06-10 10:48:34.436361] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:23:10.360 [2024-06-10 10:48:34.436412] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:10.360 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.360 [2024-06-10 10:48:34.520147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:10.360 [2024-06-10 10:48:34.609238] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:10.360 [2024-06-10 10:48:34.609325] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:10.360 [2024-06-10 10:48:34.609333] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:10.360 [2024-06-10 10:48:34.609340] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:10.360 [2024-06-10 10:48:34.609346] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:10.360 [2024-06-10 10:48:34.609477] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:23:10.360 [2024-06-10 10:48:34.609640] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.360 [2024-06-10 10:48:34.609642] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:23:10.931 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:10.931 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@863 -- # return 0 00:23:10.931 10:48:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:10.931 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:10.931 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:11.194 [2024-06-10 10:48:35.254675] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:11.194 10:48:35 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:11.194 Malloc0 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:11.194 [2024-06-10 10:48:35.322521] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:11.194 [2024-06-10 10:48:35.322739] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:11.194 [2024-06-10 10:48:35.334663] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:11.194 Malloc1 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:11.194 10:48:35 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=915807 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 915807 /var/tmp/bdevperf.sock 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # '[' -z 915807 ']' 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:11.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
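At this point bdevperf has been launched with -z (start idle and wait for configuration over RPC) and -r /var/tmp/bdevperf.sock, so every rpc_cmd that follows talks to that socket rather than to the target's /var/tmp/spdk.sock. The first attach the test performs next could equally be issued by hand with SPDK's rpc.py, reusing the arguments visible in the following command; this is a sketch only: the relative rpc.py path assumes the same checkout, and the names/addresses are the ones this test uses.

  # attach the remote subsystem as bdev "NVMe0" through bdevperf's RPC socket
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # re-issuing the call with the same -b NVMe0 is exactly what the NOT cases below exercise:
  # it is rejected with JSON-RPC error -114 ("A controller named NVMe0 already exists ...")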
00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:11.194 10:48:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:12.138 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:12.138 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@863 -- # return 0 00:23:12.138 10:48:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:12.138 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:12.138 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:12.138 NVMe0n1 00:23:12.138 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:12.138 10:48:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:12.138 10:48:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:12.138 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:12.138 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:12.138 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:12.138 1 00:23:12.138 10:48:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:12.138 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:23:12.138 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:12.138 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:23:12.138 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:12.138 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:23:12.138 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:12.138 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:12.138 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:12.138 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:12.138 request: 00:23:12.138 { 00:23:12.138 "name": "NVMe0", 00:23:12.138 "trtype": "tcp", 00:23:12.138 "traddr": "10.0.0.2", 00:23:12.138 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:12.138 "hostaddr": "10.0.0.2", 00:23:12.138 "hostsvcid": "60000", 00:23:12.138 "adrfam": "ipv4", 00:23:12.138 "trsvcid": "4420", 00:23:12.138 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.138 "method": 
"bdev_nvme_attach_controller", 00:23:12.138 "req_id": 1 00:23:12.138 } 00:23:12.138 Got JSON-RPC error response 00:23:12.138 response: 00:23:12.138 { 00:23:12.138 "code": -114, 00:23:12.138 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:12.138 } 00:23:12.138 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:12.138 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:23:12.138 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:12.138 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:12.138 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:12.138 10:48:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:12.138 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:23:12.138 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:12.138 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:23:12.138 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:12.139 request: 00:23:12.139 { 00:23:12.139 "name": "NVMe0", 00:23:12.139 "trtype": "tcp", 00:23:12.139 "traddr": "10.0.0.2", 00:23:12.139 "hostaddr": "10.0.0.2", 00:23:12.139 "hostsvcid": "60000", 00:23:12.139 "adrfam": "ipv4", 00:23:12.139 "trsvcid": "4420", 00:23:12.139 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:12.139 "method": "bdev_nvme_attach_controller", 00:23:12.139 "req_id": 1 00:23:12.139 } 00:23:12.139 Got JSON-RPC error response 00:23:12.139 response: 00:23:12.139 { 00:23:12.139 "code": -114, 00:23:12.139 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:12.139 } 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:12.139 request: 00:23:12.139 { 00:23:12.139 "name": "NVMe0", 00:23:12.139 "trtype": "tcp", 00:23:12.139 "traddr": "10.0.0.2", 00:23:12.139 "hostaddr": "10.0.0.2", 00:23:12.139 "hostsvcid": "60000", 00:23:12.139 "adrfam": "ipv4", 00:23:12.139 "trsvcid": "4420", 00:23:12.139 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.139 "multipath": "disable", 00:23:12.139 "method": "bdev_nvme_attach_controller", 00:23:12.139 "req_id": 1 00:23:12.139 } 00:23:12.139 Got JSON-RPC error response 00:23:12.139 response: 00:23:12.139 { 00:23:12.139 "code": -114, 00:23:12.139 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:23:12.139 } 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@641 -- # type -t rpc_cmd 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:12.139 request: 00:23:12.139 { 00:23:12.139 "name": "NVMe0", 00:23:12.139 "trtype": "tcp", 00:23:12.139 "traddr": "10.0.0.2", 00:23:12.139 "hostaddr": "10.0.0.2", 00:23:12.139 "hostsvcid": "60000", 00:23:12.139 "adrfam": "ipv4", 00:23:12.139 "trsvcid": "4420", 00:23:12.139 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.139 "multipath": "failover", 00:23:12.139 "method": "bdev_nvme_attach_controller", 00:23:12.139 "req_id": 1 00:23:12.139 } 00:23:12.139 Got JSON-RPC error response 00:23:12.139 response: 00:23:12.139 { 00:23:12.139 "code": -114, 00:23:12.139 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:12.139 } 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:12.139 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:12.399 00:23:12.399 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:12.399 10:48:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:12.399 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:12.399 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:12.399 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:12.399 10:48:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:12.399 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:12.399 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:12.399 00:23:12.399 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:12.399 10:48:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:12.399 10:48:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:12.399 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:12.399 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:12.399 10:48:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:12.399 10:48:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:12.399 10:48:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:13.785 0 00:23:13.785 10:48:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:13.785 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:13.785 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:13.785 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:13.785 10:48:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 915807 00:23:13.785 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@949 -- # '[' -z 915807 ']' 00:23:13.785 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # kill -0 915807 00:23:13.785 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # uname 00:23:13.785 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:13.785 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 915807 00:23:13.785 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:13.785 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:13.785 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # echo 'killing process with pid 915807' 00:23:13.785 killing process with pid 915807 00:23:13.785 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@968 -- # kill 915807 00:23:13.785 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@973 -- # wait 915807 00:23:13.785 10:48:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:13.785 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:13.785 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:13.785 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:13.785 10:48:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:13.785 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:13.785 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:13.785 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:13.785 10:48:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:13.785 10:48:37 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:13.785 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # read -r file 00:23:13.785 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:13.785 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # sort -u 00:23:13.785 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # cat 00:23:13.785 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:13.785 [2024-06-10 10:48:35.452417] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:23:13.785 [2024-06-10 10:48:35.452469] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid915807 ] 00:23:13.785 EAL: No free 2048 kB hugepages reported on node 1 00:23:13.785 [2024-06-10 10:48:35.511140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.785 [2024-06-10 10:48:35.575624] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.785 [2024-06-10 10:48:36.582998] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name 3ceebab3-2905-4b4b-bcae-e1c26671a0b4 already exists 00:23:13.785 [2024-06-10 10:48:36.583028] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:3ceebab3-2905-4b4b-bcae-e1c26671a0b4 alias for bdev NVMe1n1 00:23:13.785 [2024-06-10 10:48:36.583038] bdev_nvme.c:4308:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:13.785 Running I/O for 1 seconds... 
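Note: the NOT rpc_cmd checks traced above exercise bdev_nvme_attach_controller's duplicate handling. With a controller named NVMe0 already attached to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420, reusing that name for cnode2 or re-adding the same network path (with or without -x failover/-x disable) is rejected with JSON-RPC error -114, while attaching via the second listener port 4421 of cnode1 succeeds. A minimal hand-run sketch of the same calls against the bdevperf RPC socket used here, with scripts/rpc.py standing in for the harness's rpc_cmd wrapper (the rpc.py path relative to an SPDK checkout is an assumption; the I/O results of the bdevperf run continue immediately below):
  # rejected with -114 "A controller named NVMe0 already exists ...": same name, different subsystem
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
  # rejected with -114: the 10.0.0.2:4420 path to cnode1 is already attached (same outcome with -x disable)
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover
  # accepted: port 4421 is a new network path to cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1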
00:23:13.785 00:23:13.785 Latency(us) 00:23:13.785 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:13.785 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:13.785 NVMe0n1 : 1.01 20182.53 78.84 0.00 0.00 6325.05 4014.08 15400.96 00:23:13.785 =================================================================================================================== 00:23:13.785 Total : 20182.53 78.84 0.00 0.00 6325.05 4014.08 15400.96 00:23:13.785 Received shutdown signal, test time was about 1.000000 seconds 00:23:13.785 00:23:13.785 Latency(us) 00:23:13.785 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:13.785 =================================================================================================================== 00:23:13.785 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:13.785 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:13.785 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1617 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:13.785 10:48:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # read -r file 00:23:13.785 10:48:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:23:13.785 10:48:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:13.785 10:48:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:23:13.785 10:48:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:13.785 10:48:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:23:13.785 10:48:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:13.785 10:48:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:13.785 rmmod nvme_tcp 00:23:13.785 rmmod nvme_fabrics 00:23:13.785 rmmod nvme_keyring 00:23:13.785 10:48:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:13.785 10:48:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:23:13.786 10:48:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:23:13.786 10:48:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 915707 ']' 00:23:13.786 10:48:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 915707 00:23:13.786 10:48:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@949 -- # '[' -z 915707 ']' 00:23:13.786 10:48:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # kill -0 915707 00:23:13.786 10:48:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # uname 00:23:13.786 10:48:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:13.786 10:48:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 915707 00:23:14.046 10:48:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:23:14.047 10:48:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:23:14.047 10:48:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # echo 'killing process with pid 915707' 00:23:14.047 killing process with pid 915707 00:23:14.047 10:48:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@968 -- # kill 915707 00:23:14.047 [2024-06-10 10:48:38.082154] 
app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:14.047 10:48:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@973 -- # wait 915707 00:23:14.047 10:48:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:14.047 10:48:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:14.047 10:48:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:14.047 10:48:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:14.047 10:48:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:14.047 10:48:38 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.047 10:48:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:14.047 10:48:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.590 10:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:16.590 00:23:16.590 real 0m13.295s 00:23:16.590 user 0m15.653s 00:23:16.590 sys 0m6.123s 00:23:16.590 10:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:16.590 10:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.590 ************************************ 00:23:16.590 END TEST nvmf_multicontroller 00:23:16.590 ************************************ 00:23:16.590 10:48:40 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:16.590 10:48:40 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:16.590 10:48:40 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:16.590 10:48:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:16.590 ************************************ 00:23:16.590 START TEST nvmf_aer 00:23:16.590 ************************************ 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:16.590 * Looking for test storage... 
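Note: the 1-second write workload summarized in the nvmf_multicontroller results above is produced by starting the bdevperf example application idle and driving it entirely over its RPC socket. A rough standalone sketch with the parameters reported in that run (core mask 0x1, queue depth 128, 4 KiB writes, 1 second run time); the bdevperf binary location and its -z/-r/-q/-o/-w/-t options are general bdevperf usage rather than lines taken from this log, so treat them as assumptions:
  # start bdevperf idle (-z), listening for RPCs on the socket used in the trace above
  ./build/examples/bdevperf -m 0x1 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 &
  # create NVMe bdevs over NVMe-oF/TCP, e.g. the surviving 4421 path from the trace
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # trigger the run; bdevperf prints the per-bdev latency table seen in try.txt when it finishes
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests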
00:23:16.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:23:16.590 10:48:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:24.732 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:24.733 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 
0x159b)' 00:23:24.733 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:24.733 Found net devices under 0000:31:00.0: cvl_0_0 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:24.733 Found net devices under 0000:31:00.1: cvl_0_1 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:24.733 
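Note: the nvmf_tcp_init steps traced just below build the point-to-point test topology from the two ice-bound ports (cvl_0_0, cvl_0_1) discovered above: the target port cvl_0_0 is moved into its own network namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2/24, while the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1/24, with an iptables rule opening TCP port 4420 and a ping in each direction as a sanity check. Condensed from the commands in the trace that follows (the xtrace remains the authoritative record):
  ip netns add cvl_0_0_ns_spdk                                        # namespace for the target-side port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target interface into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (inside the namespace)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow TCP port 4420 on the initiator-side interface
  ping -c 1 10.0.0.2                                                  # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> initiator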
10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:24.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:24.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:23:24.733 00:23:24.733 --- 10.0.0.2 ping statistics --- 00:23:24.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.733 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:24.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:24.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:23:24.733 00:23:24.733 --- 10.0.0.1 ping statistics --- 00:23:24.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.733 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=920566 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 920566 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@830 -- # '[' -z 920566 ']' 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.733 10:48:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:24.734 10:48:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:24.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:24.734 10:48:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:24.734 10:48:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:24.734 [2024-06-10 10:48:47.955256] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:23:24.734 [2024-06-10 10:48:47.955323] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:24.734 EAL: No free 2048 kB hugepages reported on node 1 00:23:24.734 [2024-06-10 10:48:48.027972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:24.734 [2024-06-10 10:48:48.103622] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:24.734 [2024-06-10 10:48:48.103662] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:24.734 [2024-06-10 10:48:48.103669] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:24.734 [2024-06-10 10:48:48.103675] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:24.734 [2024-06-10 10:48:48.103681] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:24.734 [2024-06-10 10:48:48.103817] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.734 [2024-06-10 10:48:48.103933] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:23:24.734 [2024-06-10 10:48:48.104089] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.734 [2024-06-10 10:48:48.104090] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@863 -- # return 0 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:24.734 [2024-06-10 10:48:48.776890] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:24.734 Malloc0 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:24.734 [2024-06-10 10:48:48.836076] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:24.734 [2024-06-10 10:48:48.836294] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:24.734 [ 00:23:24.734 { 00:23:24.734 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:24.734 "subtype": "Discovery", 00:23:24.734 "listen_addresses": [], 00:23:24.734 "allow_any_host": true, 00:23:24.734 "hosts": [] 00:23:24.734 }, 00:23:24.734 { 00:23:24.734 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.734 "subtype": "NVMe", 00:23:24.734 "listen_addresses": [ 00:23:24.734 { 00:23:24.734 "trtype": "TCP", 00:23:24.734 "adrfam": "IPv4", 00:23:24.734 "traddr": "10.0.0.2", 00:23:24.734 "trsvcid": "4420" 00:23:24.734 } 00:23:24.734 ], 00:23:24.734 "allow_any_host": true, 00:23:24.734 "hosts": [], 00:23:24.734 "serial_number": "SPDK00000000000001", 00:23:24.734 "model_number": "SPDK bdev Controller", 00:23:24.734 "max_namespaces": 2, 00:23:24.734 "min_cntlid": 1, 00:23:24.734 "max_cntlid": 65519, 00:23:24.734 "namespaces": [ 00:23:24.734 { 00:23:24.734 "nsid": 1, 00:23:24.734 "bdev_name": "Malloc0", 00:23:24.734 "name": "Malloc0", 00:23:24.734 "nguid": "015BB5F2B46E46AD925754DF6D25B87C", 00:23:24.734 "uuid": "015bb5f2-b46e-46ad-9257-54df6d25b87c" 00:23:24.734 } 00:23:24.734 ] 00:23:24.734 } 00:23:24.734 ] 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=920837 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # local i=0 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 0 -lt 200 ']' 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=1 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:23:24.734 EAL: No free 2048 kB hugepages reported on node 1 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 1 -lt 200 ']' 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=2 00:23:24.734 10:48:48 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1275 -- # return 0 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:24.994 Malloc1 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:24.994 Asynchronous Event Request test 00:23:24.994 Attaching to 10.0.0.2 00:23:24.994 Attached to 10.0.0.2 00:23:24.994 Registering asynchronous event callbacks... 00:23:24.994 Starting namespace attribute notice tests for all controllers... 00:23:24.994 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:24.994 aer_cb - Changed Namespace 00:23:24.994 Cleaning up... 00:23:24.994 [ 00:23:24.994 { 00:23:24.994 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:24.994 "subtype": "Discovery", 00:23:24.994 "listen_addresses": [], 00:23:24.994 "allow_any_host": true, 00:23:24.994 "hosts": [] 00:23:24.994 }, 00:23:24.994 { 00:23:24.994 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.994 "subtype": "NVMe", 00:23:24.994 "listen_addresses": [ 00:23:24.994 { 00:23:24.994 "trtype": "TCP", 00:23:24.994 "adrfam": "IPv4", 00:23:24.994 "traddr": "10.0.0.2", 00:23:24.994 "trsvcid": "4420" 00:23:24.994 } 00:23:24.994 ], 00:23:24.994 "allow_any_host": true, 00:23:24.994 "hosts": [], 00:23:24.994 "serial_number": "SPDK00000000000001", 00:23:24.994 "model_number": "SPDK bdev Controller", 00:23:24.994 "max_namespaces": 2, 00:23:24.994 "min_cntlid": 1, 00:23:24.994 "max_cntlid": 65519, 00:23:24.994 "namespaces": [ 00:23:24.994 { 00:23:24.994 "nsid": 1, 00:23:24.994 "bdev_name": "Malloc0", 00:23:24.994 "name": "Malloc0", 00:23:24.994 "nguid": "015BB5F2B46E46AD925754DF6D25B87C", 00:23:24.994 "uuid": "015bb5f2-b46e-46ad-9257-54df6d25b87c" 00:23:24.994 }, 00:23:24.994 { 00:23:24.994 "nsid": 2, 00:23:24.994 "bdev_name": "Malloc1", 00:23:24.994 "name": "Malloc1", 00:23:24.994 "nguid": "5EF43ADE5EBB42E293DCD741AB21881B", 00:23:24.994 "uuid": "5ef43ade-5ebb-42e2-93dc-d741ab21881b" 00:23:24.994 } 00:23:24.994 ] 00:23:24.994 } 00:23:24.994 ] 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 920837 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:24.994 10:48:49 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:24.994 rmmod nvme_tcp 00:23:24.994 rmmod nvme_fabrics 00:23:24.994 rmmod nvme_keyring 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 920566 ']' 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 920566 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@949 -- # '[' -z 920566 ']' 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # kill -0 920566 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # uname 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:24.994 10:48:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 920566 00:23:25.254 10:48:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:25.254 10:48:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:25.254 10:48:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # echo 'killing process with pid 920566' 00:23:25.254 killing process with pid 920566 00:23:25.254 10:48:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@968 -- # kill 920566 00:23:25.254 [2024-06-10 10:48:49.300200] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:25.254 10:48:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@973 -- # wait 920566 00:23:25.254 10:48:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:25.254 10:48:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:25.254 10:48:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:25.254 10:48:49 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:25.254 10:48:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:25.254 10:48:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.254 10:48:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:25.254 10:48:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.799 10:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:27.799 00:23:27.799 real 0m11.127s 00:23:27.799 user 0m7.504s 00:23:27.799 sys 0m5.852s 00:23:27.799 10:48:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:27.799 10:48:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:27.799 ************************************ 00:23:27.799 END TEST nvmf_aer 00:23:27.799 ************************************ 00:23:27.799 10:48:51 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:27.799 10:48:51 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:27.799 10:48:51 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:27.799 10:48:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:27.799 ************************************ 00:23:27.799 START TEST nvmf_async_init 00:23:27.799 ************************************ 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:27.799 * Looking for test storage... 00:23:27.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
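Note: the nvmf_aer test that ends above builds a two-namespace subsystem and verifies that hot-adding the second namespace while a host is connected raises an Asynchronous Event (the 'aer_cb - Changed Namespace' line in the trace). Collected from the rpc_cmd calls in that trace, the target-side sequence looks roughly as follows when issued by hand with scripts/rpc.py against the nvmf_tgt's default /var/tmp/spdk.sock (the socket path and rpc.py location are assumptions; the RPC names and arguments are the ones logged above):
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192               # TCP transport with the options used in the trace
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0              # 64 MB malloc bdev, 512-byte blocks (namespace 1)
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # with test/nvme/aer/aer connected and waiting (-n 2), hot-add the second namespace;
  # this is what triggers the namespace-attribute-changed AEN checked by the test
  scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2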
00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:27.799 
10:48:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=20ca6b2478194e2cb1ca01a7ee6464bf 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:23:27.799 10:48:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:23:34.386 10:48:58 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:34.386 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:34.386 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:34.386 10:48:58 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:34.386 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:34.387 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:34.387 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:34.387 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:34.387 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.387 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:34.387 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:34.387 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:34.387 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:34.387 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.387 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:34.387 Found net devices under 0000:31:00.0: cvl_0_0 00:23:34.387 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.387 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:34.387 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.387 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:34.387 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:34.387 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:34.387 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:34.387 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.387 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:34.387 Found net devices under 0000:31:00.1: cvl_0_1 00:23:34.387 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.387 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:34.387 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:23:34.387 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:34.387 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:34.387 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:34.387 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:34.387 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:34.387 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:34.387 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:34.387 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:34.387 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:34.387 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:34.387 10:48:58 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:34.387 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:34.387 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:34.387 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:34.387 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:34.387 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:34.648 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:34.648 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:34.648 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:34.648 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:34.648 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:34.648 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:34.648 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:34.648 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:34.648 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.590 ms 00:23:34.648 00:23:34.648 --- 10.0.0.2 ping statistics --- 00:23:34.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.648 rtt min/avg/max/mdev = 0.590/0.590/0.590/0.000 ms 00:23:34.648 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:34.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
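Condensed, the nvmf_tcp_init steps traced above amount to the following shell sequence; this is only a sketch of the ip/iptables calls visible in this run, and the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses are specific to this host:

    # target-side port moves into its own network namespace; initiator side stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic (port 4420) in through the initiator-side interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # verify connectivity in both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1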
00:23:34.648 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:23:34.648 00:23:34.648 --- 10.0.0.1 ping statistics --- 00:23:34.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.648 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:23:34.648 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:34.648 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:23:34.648 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:34.648 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:34.648 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:34.648 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:34.648 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:34.648 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:34.648 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:34.648 10:48:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:34.648 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:34.648 10:48:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:34.648 10:48:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:34.648 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=925213 00:23:34.648 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 925213 00:23:34.648 10:48:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:34.648 10:48:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@830 -- # '[' -z 925213 ']' 00:23:34.648 10:48:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.648 10:48:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:34.648 10:48:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.909 10:48:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:34.909 10:48:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:34.909 [2024-06-10 10:48:58.983524] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:23:34.909 [2024-06-10 10:48:58.983572] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:34.909 EAL: No free 2048 kB hugepages reported on node 1 00:23:34.909 [2024-06-10 10:48:59.049269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.909 [2024-06-10 10:48:59.113506] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:34.909 [2024-06-10 10:48:59.113544] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
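The nvmfappstart step boils down to launching nvmf_tgt inside that namespace and waiting for its RPC socket; a minimal sketch, where $SPDK_DIR stands in for the workspace checkout and the socket-polling loop is a crude substitute for the suite's waitforlisten helper:

    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # wait for the UNIX-domain RPC socket that rpc_cmd talks to
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done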
00:23:34.909 [2024-06-10 10:48:59.113552] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:34.909 [2024-06-10 10:48:59.113558] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:34.909 [2024-06-10 10:48:59.113564] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:34.909 [2024-06-10 10:48:59.113582] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.480 10:48:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:35.480 10:48:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@863 -- # return 0 00:23:35.480 10:48:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:35.480 10:48:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:35.480 10:48:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:35.741 10:48:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.741 10:48:59 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:35.741 10:48:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:35.741 10:48:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:35.741 [2024-06-10 10:48:59.804350] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:35.741 10:48:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:35.741 10:48:59 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:35.741 10:48:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:35.741 10:48:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:35.741 null0 00:23:35.741 10:48:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:35.741 10:48:59 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:35.741 10:48:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:35.741 10:48:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:35.741 10:48:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:35.741 10:48:59 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:35.741 10:48:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:35.741 10:48:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:35.741 10:48:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:35.741 10:48:59 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 20ca6b2478194e2cb1ca01a7ee6464bf 00:23:35.741 10:48:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:35.741 10:48:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:35.741 10:48:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:35.741 10:48:59 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:35.741 
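The rpc_cmd calls above correspond to scripts/rpc.py subcommands; as a sketch of the same provisioning sequence against the default RPC socket (the nguid and the 10.0.0.2:4420 listener are the values generated in this run):

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py bdev_null_create null0 1024 512        # 1024 MiB null bdev, 512-byte blocks
    scripts/rpc.py bdev_wait_for_examine
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 20ca6b2478194e2cb1ca01a7ee6464bf
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # attach from the host side; the resulting nvme0n1 bdev is dumped by bdev_get_bdevs below
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0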
10:48:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:35.741 10:48:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:35.741 [2024-06-10 10:48:59.864442] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:35.741 [2024-06-10 10:48:59.864638] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:35.741 10:48:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:35.741 10:48:59 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:35.741 10:48:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:35.741 10:48:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.002 nvme0n1 00:23:36.002 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:36.002 10:49:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:36.002 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:36.002 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.002 [ 00:23:36.002 { 00:23:36.002 "name": "nvme0n1", 00:23:36.002 "aliases": [ 00:23:36.002 "20ca6b24-7819-4e2c-b1ca-01a7ee6464bf" 00:23:36.002 ], 00:23:36.002 "product_name": "NVMe disk", 00:23:36.002 "block_size": 512, 00:23:36.002 "num_blocks": 2097152, 00:23:36.002 "uuid": "20ca6b24-7819-4e2c-b1ca-01a7ee6464bf", 00:23:36.002 "assigned_rate_limits": { 00:23:36.002 "rw_ios_per_sec": 0, 00:23:36.002 "rw_mbytes_per_sec": 0, 00:23:36.002 "r_mbytes_per_sec": 0, 00:23:36.002 "w_mbytes_per_sec": 0 00:23:36.002 }, 00:23:36.002 "claimed": false, 00:23:36.002 "zoned": false, 00:23:36.002 "supported_io_types": { 00:23:36.002 "read": true, 00:23:36.002 "write": true, 00:23:36.002 "unmap": false, 00:23:36.002 "write_zeroes": true, 00:23:36.002 "flush": true, 00:23:36.002 "reset": true, 00:23:36.002 "compare": true, 00:23:36.002 "compare_and_write": true, 00:23:36.002 "abort": true, 00:23:36.002 "nvme_admin": true, 00:23:36.002 "nvme_io": true 00:23:36.002 }, 00:23:36.002 "memory_domains": [ 00:23:36.002 { 00:23:36.002 "dma_device_id": "system", 00:23:36.002 "dma_device_type": 1 00:23:36.002 } 00:23:36.002 ], 00:23:36.002 "driver_specific": { 00:23:36.002 "nvme": [ 00:23:36.002 { 00:23:36.002 "trid": { 00:23:36.002 "trtype": "TCP", 00:23:36.002 "adrfam": "IPv4", 00:23:36.002 "traddr": "10.0.0.2", 00:23:36.002 "trsvcid": "4420", 00:23:36.002 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:36.002 }, 00:23:36.002 "ctrlr_data": { 00:23:36.002 "cntlid": 1, 00:23:36.002 "vendor_id": "0x8086", 00:23:36.002 "model_number": "SPDK bdev Controller", 00:23:36.002 "serial_number": "00000000000000000000", 00:23:36.002 "firmware_revision": "24.09", 00:23:36.002 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:36.002 "oacs": { 00:23:36.002 "security": 0, 00:23:36.002 "format": 0, 00:23:36.002 "firmware": 0, 00:23:36.002 "ns_manage": 0 00:23:36.002 }, 00:23:36.002 "multi_ctrlr": true, 00:23:36.002 "ana_reporting": false 00:23:36.002 }, 00:23:36.002 "vs": { 00:23:36.002 "nvme_version": "1.3" 00:23:36.002 }, 00:23:36.002 "ns_data": { 00:23:36.002 "id": 1, 00:23:36.002 "can_share": true 00:23:36.002 } 
00:23:36.002 } 00:23:36.002 ], 00:23:36.002 "mp_policy": "active_passive" 00:23:36.002 } 00:23:36.002 } 00:23:36.002 ] 00:23:36.002 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:36.002 10:49:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:36.002 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:36.003 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.003 [2024-06-10 10:49:00.137660] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:36.003 [2024-06-10 10:49:00.137729] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ff400 (9): Bad file descriptor 00:23:36.003 [2024-06-10 10:49:00.269339] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:36.003 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:36.003 10:49:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:36.003 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:36.003 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.003 [ 00:23:36.003 { 00:23:36.003 "name": "nvme0n1", 00:23:36.003 "aliases": [ 00:23:36.003 "20ca6b24-7819-4e2c-b1ca-01a7ee6464bf" 00:23:36.003 ], 00:23:36.003 "product_name": "NVMe disk", 00:23:36.003 "block_size": 512, 00:23:36.003 "num_blocks": 2097152, 00:23:36.003 "uuid": "20ca6b24-7819-4e2c-b1ca-01a7ee6464bf", 00:23:36.003 "assigned_rate_limits": { 00:23:36.003 "rw_ios_per_sec": 0, 00:23:36.003 "rw_mbytes_per_sec": 0, 00:23:36.003 "r_mbytes_per_sec": 0, 00:23:36.003 "w_mbytes_per_sec": 0 00:23:36.003 }, 00:23:36.003 "claimed": false, 00:23:36.003 "zoned": false, 00:23:36.003 "supported_io_types": { 00:23:36.003 "read": true, 00:23:36.003 "write": true, 00:23:36.003 "unmap": false, 00:23:36.003 "write_zeroes": true, 00:23:36.003 "flush": true, 00:23:36.003 "reset": true, 00:23:36.003 "compare": true, 00:23:36.003 "compare_and_write": true, 00:23:36.003 "abort": true, 00:23:36.003 "nvme_admin": true, 00:23:36.003 "nvme_io": true 00:23:36.003 }, 00:23:36.003 "memory_domains": [ 00:23:36.003 { 00:23:36.003 "dma_device_id": "system", 00:23:36.003 "dma_device_type": 1 00:23:36.003 } 00:23:36.003 ], 00:23:36.003 "driver_specific": { 00:23:36.003 "nvme": [ 00:23:36.003 { 00:23:36.003 "trid": { 00:23:36.003 "trtype": "TCP", 00:23:36.003 "adrfam": "IPv4", 00:23:36.003 "traddr": "10.0.0.2", 00:23:36.003 "trsvcid": "4420", 00:23:36.003 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:36.003 }, 00:23:36.003 "ctrlr_data": { 00:23:36.003 "cntlid": 2, 00:23:36.003 "vendor_id": "0x8086", 00:23:36.003 "model_number": "SPDK bdev Controller", 00:23:36.003 "serial_number": "00000000000000000000", 00:23:36.003 "firmware_revision": "24.09", 00:23:36.003 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:36.003 "oacs": { 00:23:36.003 "security": 0, 00:23:36.003 "format": 0, 00:23:36.003 "firmware": 0, 00:23:36.003 "ns_manage": 0 00:23:36.003 }, 00:23:36.003 "multi_ctrlr": true, 00:23:36.003 "ana_reporting": false 00:23:36.003 }, 00:23:36.003 "vs": { 00:23:36.003 "nvme_version": "1.3" 00:23:36.003 }, 00:23:36.003 "ns_data": { 00:23:36.003 "id": 1, 00:23:36.003 "can_share": true 00:23:36.003 } 00:23:36.003 } 00:23:36.003 ], 00:23:36.003 "mp_policy": "active_passive" 
00:23:36.003 } 00:23:36.003 } 00:23:36.003 ] 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.xsbgI026oI 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.xsbgI026oI 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.265 [2024-06-10 10:49:00.334297] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:36.265 [2024-06-10 10:49:00.334431] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xsbgI026oI 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.265 [2024-06-10 10:49:00.346319] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xsbgI026oI 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.265 [2024-06-10 10:49:00.358354] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:36.265 [2024-06-10 10:49:00.358396] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to 
be removed in v24.09 00:23:36.265 nvme0n1 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.265 [ 00:23:36.265 { 00:23:36.265 "name": "nvme0n1", 00:23:36.265 "aliases": [ 00:23:36.265 "20ca6b24-7819-4e2c-b1ca-01a7ee6464bf" 00:23:36.265 ], 00:23:36.265 "product_name": "NVMe disk", 00:23:36.265 "block_size": 512, 00:23:36.265 "num_blocks": 2097152, 00:23:36.265 "uuid": "20ca6b24-7819-4e2c-b1ca-01a7ee6464bf", 00:23:36.265 "assigned_rate_limits": { 00:23:36.265 "rw_ios_per_sec": 0, 00:23:36.265 "rw_mbytes_per_sec": 0, 00:23:36.265 "r_mbytes_per_sec": 0, 00:23:36.265 "w_mbytes_per_sec": 0 00:23:36.265 }, 00:23:36.265 "claimed": false, 00:23:36.265 "zoned": false, 00:23:36.265 "supported_io_types": { 00:23:36.265 "read": true, 00:23:36.265 "write": true, 00:23:36.265 "unmap": false, 00:23:36.265 "write_zeroes": true, 00:23:36.265 "flush": true, 00:23:36.265 "reset": true, 00:23:36.265 "compare": true, 00:23:36.265 "compare_and_write": true, 00:23:36.265 "abort": true, 00:23:36.265 "nvme_admin": true, 00:23:36.265 "nvme_io": true 00:23:36.265 }, 00:23:36.265 "memory_domains": [ 00:23:36.265 { 00:23:36.265 "dma_device_id": "system", 00:23:36.265 "dma_device_type": 1 00:23:36.265 } 00:23:36.265 ], 00:23:36.265 "driver_specific": { 00:23:36.265 "nvme": [ 00:23:36.265 { 00:23:36.265 "trid": { 00:23:36.265 "trtype": "TCP", 00:23:36.265 "adrfam": "IPv4", 00:23:36.265 "traddr": "10.0.0.2", 00:23:36.265 "trsvcid": "4421", 00:23:36.265 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:36.265 }, 00:23:36.265 "ctrlr_data": { 00:23:36.265 "cntlid": 3, 00:23:36.265 "vendor_id": "0x8086", 00:23:36.265 "model_number": "SPDK bdev Controller", 00:23:36.265 "serial_number": "00000000000000000000", 00:23:36.265 "firmware_revision": "24.09", 00:23:36.265 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:36.265 "oacs": { 00:23:36.265 "security": 0, 00:23:36.265 "format": 0, 00:23:36.265 "firmware": 0, 00:23:36.265 "ns_manage": 0 00:23:36.265 }, 00:23:36.265 "multi_ctrlr": true, 00:23:36.265 "ana_reporting": false 00:23:36.265 }, 00:23:36.265 "vs": { 00:23:36.265 "nvme_version": "1.3" 00:23:36.265 }, 00:23:36.265 "ns_data": { 00:23:36.265 "id": 1, 00:23:36.265 "can_share": true 00:23:36.265 } 00:23:36.265 } 00:23:36.265 ], 00:23:36.265 "mp_policy": "active_passive" 00:23:36.265 } 00:23:36.265 } 00:23:36.265 ] 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.xsbgI026oI 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:36.265 rmmod nvme_tcp 00:23:36.265 rmmod nvme_fabrics 00:23:36.265 rmmod nvme_keyring 00:23:36.265 10:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:36.526 10:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:23:36.526 10:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:23:36.526 10:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 925213 ']' 00:23:36.526 10:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 925213 00:23:36.526 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@949 -- # '[' -z 925213 ']' 00:23:36.526 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # kill -0 925213 00:23:36.526 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # uname 00:23:36.526 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:36.526 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 925213 00:23:36.526 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:36.526 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:36.526 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 925213' 00:23:36.526 killing process with pid 925213 00:23:36.526 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@968 -- # kill 925213 00:23:36.526 [2024-06-10 10:49:00.611701] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:36.526 [2024-06-10 10:49:00.611729] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:36.526 [2024-06-10 10:49:00.611737] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:36.526 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@973 -- # wait 925213 00:23:36.526 10:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:36.526 10:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:36.526 10:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:36.526 10:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:36.526 10:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:36.526 10:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.526 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:36.526 10:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.073 10:49:02 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:39.073 00:23:39.073 real 0m11.223s 00:23:39.073 user 0m4.074s 00:23:39.073 sys 0m5.584s 00:23:39.073 10:49:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:39.073 10:49:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.073 ************************************ 00:23:39.073 END TEST nvmf_async_init 00:23:39.073 ************************************ 00:23:39.073 10:49:02 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:39.073 10:49:02 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:39.073 10:49:02 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:39.073 10:49:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:39.073 ************************************ 00:23:39.073 START TEST dma 00:23:39.073 ************************************ 00:23:39.073 10:49:02 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:39.073 * Looking for test storage... 00:23:39.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:39.073 10:49:02 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:39.073 10:49:02 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:23:39.073 10:49:02 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:39.073 10:49:02 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:39.073 10:49:02 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:39.073 10:49:02 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:39.073 10:49:02 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:39.073 10:49:02 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:39.073 10:49:02 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:39.073 10:49:02 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:39.073 10:49:02 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:39.073 10:49:02 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:39.073 10:49:03 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:39.073 10:49:03 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:39.073 10:49:03 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:39.073 10:49:03 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:39.073 10:49:03 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:39.073 10:49:03 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:39.073 10:49:03 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:39.073 10:49:03 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:39.073 10:49:03 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:39.073 10:49:03 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:39.073 10:49:03 nvmf_tcp.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.073 10:49:03 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.073 10:49:03 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.073 10:49:03 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:23:39.074 10:49:03 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.074 10:49:03 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:23:39.074 10:49:03 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:39.074 10:49:03 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:39.074 10:49:03 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:39.074 10:49:03 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:39.074 10:49:03 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:39.074 10:49:03 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:39.074 10:49:03 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:39.074 10:49:03 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:39.074 10:49:03 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:39.074 10:49:03 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:23:39.074 00:23:39.074 real 0m0.129s 00:23:39.074 user 0m0.064s 00:23:39.074 sys 0m0.073s 00:23:39.074 10:49:03 nvmf_tcp.dma -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:39.074 10:49:03 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:23:39.074 ************************************ 
00:23:39.074 END TEST dma 00:23:39.074 ************************************ 00:23:39.074 10:49:03 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:39.074 10:49:03 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:39.074 10:49:03 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:39.074 10:49:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:39.074 ************************************ 00:23:39.074 START TEST nvmf_identify 00:23:39.074 ************************************ 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:39.074 * Looking for test storage... 00:23:39.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:23:39.074 10:49:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:47.213 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:47.213 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:23:47.213 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:47.213 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:47.213 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:47.213 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:47.213 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:47.213 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:23:47.213 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:47.213 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:23:47.213 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:23:47.213 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:47.214 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:47.214 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:47.214 Found net devices under 0000:31:00.0: cvl_0_0 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
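The discovery loop above maps each whitelisted PCI function to its kernel net device through sysfs; a stand-alone sketch of that lookup, using the 0000:31:00.0 E810 port found in this run as the example address:

    pci=0000:31:00.0
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        name=${dev##*/}                       # cvl_0_0 in this log
        state=$(cat "$dev/operstate")         # the suite only keeps interfaces that are up
        echo "Found net devices under $pci: $name ($state)"
    done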
00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:47.214 Found net devices under 0000:31:00.1: cvl_0_1 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:47.214 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:47.215 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:47.215 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:47.215 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:47.215 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:47.215 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:47.215 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:47.215 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:47.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:47.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:23:47.215 00:23:47.215 --- 10.0.0.2 ping statistics --- 00:23:47.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.215 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:23:47.215 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:47.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:47.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.375 ms 00:23:47.215 00:23:47.215 --- 10.0.0.1 ping statistics --- 00:23:47.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.215 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:23:47.215 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:47.215 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:23:47.215 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:47.215 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:47.215 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:47.215 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:47.215 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:47.215 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:47.215 10:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:47.215 10:49:10 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:47.215 10:49:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:47.215 10:49:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:47.215 10:49:10 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=929674 00:23:47.215 10:49:10 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:47.215 10:49:10 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 929674 00:23:47.215 10:49:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@830 -- # '[' -z 929674 ']' 00:23:47.215 10:49:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:47.215 10:49:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:47.215 10:49:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:47.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:47.215 10:49:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:47.215 10:49:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:47.215 10:49:10 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:47.215 [2024-06-10 10:49:10.639266] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
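For readers skimming the trace: the nvmf_tcp_init block above reduces to the following network plumbing, condensed from the commands already shown in this run (the interface names cvl_0_0/cvl_0_1, the cvl_0_0_ns_spdk namespace and the 10.0.0.0/24 addresses are all taken from the log; this is a recap sketch, not extra commands executed by the job):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target side of the link (cvl_0_0, 10.0.0.2) lives inside the namespace while the initiator side (cvl_0_1, 10.0.0.1) stays in the root namespace, which is why host/identify.sh launches nvmf_tgt under ip netns exec cvl_0_0_ns_spdk.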
00:23:47.215 [2024-06-10 10:49:10.639335] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:47.215 EAL: No free 2048 kB hugepages reported on node 1 00:23:47.215 [2024-06-10 10:49:10.713436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:47.215 [2024-06-10 10:49:10.792275] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:47.215 [2024-06-10 10:49:10.792315] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:47.215 [2024-06-10 10:49:10.792322] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:47.215 [2024-06-10 10:49:10.792329] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:47.215 [2024-06-10 10:49:10.792335] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:47.215 [2024-06-10 10:49:10.792472] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:23:47.215 [2024-06-10 10:49:10.792589] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:23:47.215 [2024-06-10 10:49:10.792745] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:23:47.215 [2024-06-10 10:49:10.792746] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:23:47.215 10:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:47.215 10:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@863 -- # return 0 00:23:47.215 10:49:11 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:47.215 10:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:47.215 10:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:47.215 [2024-06-10 10:49:11.402543] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:47.215 10:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:47.215 10:49:11 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:47.215 10:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:47.215 10:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:47.215 10:49:11 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:47.215 10:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:47.215 10:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:47.215 Malloc0 00:23:47.215 10:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:47.215 10:49:11 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:47.215 10:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:47.216 10:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:47.216 10:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:47.216 10:49:11 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 
ABCDEF0123456789 00:23:47.216 10:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:47.216 10:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:47.216 10:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:47.216 10:49:11 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:47.216 10:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:47.216 10:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:47.478 [2024-06-10 10:49:11.501808] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:47.478 [2024-06-10 10:49:11.502015] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:47.478 10:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:47.478 10:49:11 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:47.478 10:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:47.478 10:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:47.478 10:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:47.478 10:49:11 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:47.478 10:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:47.478 10:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:47.478 [ 00:23:47.478 { 00:23:47.478 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:47.478 "subtype": "Discovery", 00:23:47.478 "listen_addresses": [ 00:23:47.478 { 00:23:47.478 "trtype": "TCP", 00:23:47.478 "adrfam": "IPv4", 00:23:47.478 "traddr": "10.0.0.2", 00:23:47.478 "trsvcid": "4420" 00:23:47.478 } 00:23:47.479 ], 00:23:47.479 "allow_any_host": true, 00:23:47.479 "hosts": [] 00:23:47.479 }, 00:23:47.479 { 00:23:47.479 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:47.479 "subtype": "NVMe", 00:23:47.479 "listen_addresses": [ 00:23:47.479 { 00:23:47.479 "trtype": "TCP", 00:23:47.479 "adrfam": "IPv4", 00:23:47.479 "traddr": "10.0.0.2", 00:23:47.479 "trsvcid": "4420" 00:23:47.479 } 00:23:47.479 ], 00:23:47.479 "allow_any_host": true, 00:23:47.479 "hosts": [], 00:23:47.479 "serial_number": "SPDK00000000000001", 00:23:47.479 "model_number": "SPDK bdev Controller", 00:23:47.479 "max_namespaces": 32, 00:23:47.479 "min_cntlid": 1, 00:23:47.479 "max_cntlid": 65519, 00:23:47.479 "namespaces": [ 00:23:47.479 { 00:23:47.479 "nsid": 1, 00:23:47.479 "bdev_name": "Malloc0", 00:23:47.479 "name": "Malloc0", 00:23:47.479 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:47.479 "eui64": "ABCDEF0123456789", 00:23:47.479 "uuid": "23e34902-0ac1-481b-a69a-8ded147b2cd8" 00:23:47.479 } 00:23:47.479 ] 00:23:47.479 } 00:23:47.479 ] 00:23:47.479 10:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:47.479 10:49:11 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:47.479 [2024-06-10 
10:49:11.562265] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:23:47.479 [2024-06-10 10:49:11.562310] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid930022 ] 00:23:47.479 EAL: No free 2048 kB hugepages reported on node 1 00:23:47.479 [2024-06-10 10:49:11.593891] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:47.479 [2024-06-10 10:49:11.593937] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:47.479 [2024-06-10 10:49:11.593942] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:47.479 [2024-06-10 10:49:11.593954] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:47.479 [2024-06-10 10:49:11.593962] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:47.479 [2024-06-10 10:49:11.597281] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:47.479 [2024-06-10 10:49:11.597309] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xe10ec0 0 00:23:47.479 [2024-06-10 10:49:11.605250] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:47.479 [2024-06-10 10:49:11.605262] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:47.479 [2024-06-10 10:49:11.605266] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:47.479 [2024-06-10 10:49:11.605270] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:47.479 [2024-06-10 10:49:11.605307] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.479 [2024-06-10 10:49:11.605312] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.479 [2024-06-10 10:49:11.605317] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe10ec0) 00:23:47.479 [2024-06-10 10:49:11.605331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:47.479 [2024-06-10 10:49:11.605347] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe95b10, cid 0, qid 0 00:23:47.479 [2024-06-10 10:49:11.613254] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.479 [2024-06-10 10:49:11.613263] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.479 [2024-06-10 10:49:11.613266] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.479 [2024-06-10 10:49:11.613271] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe95b10) on tqpair=0xe10ec0 00:23:47.479 [2024-06-10 10:49:11.613283] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:47.479 [2024-06-10 10:49:11.613290] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:47.479 [2024-06-10 10:49:11.613296] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:47.479 [2024-06-10 10:49:11.613309] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 
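For reference while reading the connect trace: the target this identify run talks to was configured earlier through the rpc_cmd wrapper (the autotest helper driving SPDK's JSON-RPC socket, /var/tmp/spdk.sock above). Condensed from the calls already shown in the log, the target-side setup amounts to:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

nvmf_get_subsystems then lists both the discovery subsystem and cnode1 listening on 10.0.0.2:4420, which is exactly what the two discovery log entries further down report.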
00:23:47.479 [2024-06-10 10:49:11.613313] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.479 [2024-06-10 10:49:11.613316] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe10ec0) 00:23:47.479 [2024-06-10 10:49:11.613324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.479 [2024-06-10 10:49:11.613340] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe95b10, cid 0, qid 0 00:23:47.479 [2024-06-10 10:49:11.613547] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.479 [2024-06-10 10:49:11.613553] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.479 [2024-06-10 10:49:11.613557] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.479 [2024-06-10 10:49:11.613561] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe95b10) on tqpair=0xe10ec0 00:23:47.479 [2024-06-10 10:49:11.613566] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:47.479 [2024-06-10 10:49:11.613573] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:47.479 [2024-06-10 10:49:11.613580] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.479 [2024-06-10 10:49:11.613584] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.479 [2024-06-10 10:49:11.613587] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe10ec0) 00:23:47.479 [2024-06-10 10:49:11.613594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.479 [2024-06-10 10:49:11.613604] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe95b10, cid 0, qid 0 00:23:47.479 [2024-06-10 10:49:11.613787] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.479 [2024-06-10 10:49:11.613794] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.479 [2024-06-10 10:49:11.613797] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.479 [2024-06-10 10:49:11.613801] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe95b10) on tqpair=0xe10ec0 00:23:47.479 [2024-06-10 10:49:11.613806] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:47.479 [2024-06-10 10:49:11.613814] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:47.479 [2024-06-10 10:49:11.613820] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.479 [2024-06-10 10:49:11.613824] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.479 [2024-06-10 10:49:11.613827] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe10ec0) 00:23:47.479 [2024-06-10 10:49:11.613834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.479 [2024-06-10 10:49:11.613843] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe95b10, cid 0, qid 0 00:23:47.479 [2024-06-10 10:49:11.614010] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.479 [2024-06-10 10:49:11.614017] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.479 [2024-06-10 10:49:11.614020] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.479 [2024-06-10 10:49:11.614024] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe95b10) on tqpair=0xe10ec0 00:23:47.479 [2024-06-10 10:49:11.614029] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:47.479 [2024-06-10 10:49:11.614038] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.479 [2024-06-10 10:49:11.614042] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.479 [2024-06-10 10:49:11.614045] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe10ec0) 00:23:47.479 [2024-06-10 10:49:11.614052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.479 [2024-06-10 10:49:11.614061] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe95b10, cid 0, qid 0 00:23:47.479 [2024-06-10 10:49:11.614234] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.479 [2024-06-10 10:49:11.614249] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.479 [2024-06-10 10:49:11.614253] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.479 [2024-06-10 10:49:11.614256] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe95b10) on tqpair=0xe10ec0 00:23:47.479 [2024-06-10 10:49:11.614261] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:47.479 [2024-06-10 10:49:11.614266] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:47.480 [2024-06-10 10:49:11.614273] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:47.480 [2024-06-10 10:49:11.614379] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:47.480 [2024-06-10 10:49:11.614383] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:47.480 [2024-06-10 10:49:11.614392] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.480 [2024-06-10 10:49:11.614396] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.480 [2024-06-10 10:49:11.614399] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe10ec0) 00:23:47.480 [2024-06-10 10:49:11.614406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.480 [2024-06-10 10:49:11.614416] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe95b10, cid 0, qid 0 00:23:47.480 [2024-06-10 10:49:11.614628] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.480 [2024-06-10 10:49:11.614635] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.480 
[2024-06-10 10:49:11.614638] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.480 [2024-06-10 10:49:11.614642] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe95b10) on tqpair=0xe10ec0 00:23:47.480 [2024-06-10 10:49:11.614647] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:47.480 [2024-06-10 10:49:11.614655] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.480 [2024-06-10 10:49:11.614659] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.480 [2024-06-10 10:49:11.614663] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe10ec0) 00:23:47.480 [2024-06-10 10:49:11.614669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.480 [2024-06-10 10:49:11.614679] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe95b10, cid 0, qid 0 00:23:47.480 [2024-06-10 10:49:11.614854] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.480 [2024-06-10 10:49:11.614860] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.480 [2024-06-10 10:49:11.614864] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.480 [2024-06-10 10:49:11.614867] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe95b10) on tqpair=0xe10ec0 00:23:47.480 [2024-06-10 10:49:11.614872] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:47.480 [2024-06-10 10:49:11.614876] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:47.480 [2024-06-10 10:49:11.614884] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:47.480 [2024-06-10 10:49:11.614892] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:47.480 [2024-06-10 10:49:11.614903] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.480 [2024-06-10 10:49:11.614906] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe10ec0) 00:23:47.480 [2024-06-10 10:49:11.614913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.480 [2024-06-10 10:49:11.614923] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe95b10, cid 0, qid 0 00:23:47.480 [2024-06-10 10:49:11.615184] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:47.480 [2024-06-10 10:49:11.615190] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:47.480 [2024-06-10 10:49:11.615194] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:47.480 [2024-06-10 10:49:11.615198] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe10ec0): datao=0, datal=4096, cccid=0 00:23:47.480 [2024-06-10 10:49:11.615203] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe95b10) on tqpair(0xe10ec0): expected_datao=0, payload_size=4096 00:23:47.480 
[2024-06-10 10:49:11.615207] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.480 [2024-06-10 10:49:11.615215] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:47.480 [2024-06-10 10:49:11.615219] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:47.480 [2024-06-10 10:49:11.615332] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.480 [2024-06-10 10:49:11.615338] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.480 [2024-06-10 10:49:11.615342] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.480 [2024-06-10 10:49:11.615345] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe95b10) on tqpair=0xe10ec0 00:23:47.480 [2024-06-10 10:49:11.615353] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:47.480 [2024-06-10 10:49:11.615357] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:47.480 [2024-06-10 10:49:11.615362] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:47.480 [2024-06-10 10:49:11.615369] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:47.480 [2024-06-10 10:49:11.615374] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:23:47.480 [2024-06-10 10:49:11.615379] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:47.480 [2024-06-10 10:49:11.615387] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:47.480 [2024-06-10 10:49:11.615394] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.480 [2024-06-10 10:49:11.615398] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.480 [2024-06-10 10:49:11.615401] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe10ec0) 00:23:47.480 [2024-06-10 10:49:11.615408] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:47.480 [2024-06-10 10:49:11.615419] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe95b10, cid 0, qid 0 00:23:47.480 [2024-06-10 10:49:11.615639] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.480 [2024-06-10 10:49:11.615645] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.480 [2024-06-10 10:49:11.615649] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.480 [2024-06-10 10:49:11.615653] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe95b10) on tqpair=0xe10ec0 00:23:47.480 [2024-06-10 10:49:11.615660] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.480 [2024-06-10 10:49:11.615664] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.480 [2024-06-10 10:49:11.615669] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe10ec0) 00:23:47.480 [2024-06-10 10:49:11.615675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 
nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.480 [2024-06-10 10:49:11.615682] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.480 [2024-06-10 10:49:11.615685] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.480 [2024-06-10 10:49:11.615689] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xe10ec0) 00:23:47.480 [2024-06-10 10:49:11.615695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.480 [2024-06-10 10:49:11.615701] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.480 [2024-06-10 10:49:11.615704] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.480 [2024-06-10 10:49:11.615708] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xe10ec0) 00:23:47.480 [2024-06-10 10:49:11.615713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.480 [2024-06-10 10:49:11.615719] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.480 [2024-06-10 10:49:11.615723] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.480 [2024-06-10 10:49:11.615726] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe10ec0) 00:23:47.480 [2024-06-10 10:49:11.615732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.480 [2024-06-10 10:49:11.615737] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:47.480 [2024-06-10 10:49:11.615747] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:47.480 [2024-06-10 10:49:11.615753] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.480 [2024-06-10 10:49:11.615757] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe10ec0) 00:23:47.480 [2024-06-10 10:49:11.615764] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.480 [2024-06-10 10:49:11.615775] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe95b10, cid 0, qid 0 00:23:47.480 [2024-06-10 10:49:11.615780] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe95c70, cid 1, qid 0 00:23:47.480 [2024-06-10 10:49:11.615785] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe95dd0, cid 2, qid 0 00:23:47.480 [2024-06-10 10:49:11.615789] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe95f30, cid 3, qid 0 00:23:47.481 [2024-06-10 10:49:11.615794] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe96090, cid 4, qid 0 00:23:47.481 [2024-06-10 10:49:11.616029] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.481 [2024-06-10 10:49:11.616035] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.481 [2024-06-10 10:49:11.616038] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.481 [2024-06-10 10:49:11.616042] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe96090) on 
tqpair=0xe10ec0 00:23:47.481 [2024-06-10 10:49:11.616047] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:47.481 [2024-06-10 10:49:11.616052] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:47.481 [2024-06-10 10:49:11.616062] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.481 [2024-06-10 10:49:11.616066] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe10ec0) 00:23:47.481 [2024-06-10 10:49:11.616072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.481 [2024-06-10 10:49:11.616086] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe96090, cid 4, qid 0 00:23:47.481 [2024-06-10 10:49:11.616337] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:47.481 [2024-06-10 10:49:11.616344] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:47.481 [2024-06-10 10:49:11.616347] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:47.481 [2024-06-10 10:49:11.616351] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe10ec0): datao=0, datal=4096, cccid=4 00:23:47.481 [2024-06-10 10:49:11.616356] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe96090) on tqpair(0xe10ec0): expected_datao=0, payload_size=4096 00:23:47.481 [2024-06-10 10:49:11.616360] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.481 [2024-06-10 10:49:11.616367] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:47.481 [2024-06-10 10:49:11.616370] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:47.481 [2024-06-10 10:49:11.616506] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.481 [2024-06-10 10:49:11.616513] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.481 [2024-06-10 10:49:11.616516] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.481 [2024-06-10 10:49:11.616520] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe96090) on tqpair=0xe10ec0 00:23:47.481 [2024-06-10 10:49:11.616531] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:47.481 [2024-06-10 10:49:11.616552] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.481 [2024-06-10 10:49:11.616556] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe10ec0) 00:23:47.481 [2024-06-10 10:49:11.616563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.481 [2024-06-10 10:49:11.616569] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.481 [2024-06-10 10:49:11.616573] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.481 [2024-06-10 10:49:11.616576] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe10ec0) 00:23:47.481 [2024-06-10 10:49:11.616583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.481 [2024-06-10 10:49:11.616596] 
nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe96090, cid 4, qid 0 00:23:47.481 [2024-06-10 10:49:11.616601] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe961f0, cid 5, qid 0 00:23:47.481 [2024-06-10 10:49:11.616809] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:47.481 [2024-06-10 10:49:11.616815] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:47.481 [2024-06-10 10:49:11.616819] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:47.481 [2024-06-10 10:49:11.616822] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe10ec0): datao=0, datal=1024, cccid=4 00:23:47.481 [2024-06-10 10:49:11.616827] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe96090) on tqpair(0xe10ec0): expected_datao=0, payload_size=1024 00:23:47.481 [2024-06-10 10:49:11.616831] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.481 [2024-06-10 10:49:11.616837] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:47.481 [2024-06-10 10:49:11.616841] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:47.481 [2024-06-10 10:49:11.616847] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.481 [2024-06-10 10:49:11.616852] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.481 [2024-06-10 10:49:11.616856] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.481 [2024-06-10 10:49:11.616859] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe961f0) on tqpair=0xe10ec0 00:23:47.481 [2024-06-10 10:49:11.657444] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.481 [2024-06-10 10:49:11.657461] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.481 [2024-06-10 10:49:11.657465] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.481 [2024-06-10 10:49:11.657469] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe96090) on tqpair=0xe10ec0 00:23:47.481 [2024-06-10 10:49:11.657483] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.481 [2024-06-10 10:49:11.657486] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe10ec0) 00:23:47.481 [2024-06-10 10:49:11.657494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.481 [2024-06-10 10:49:11.657510] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe96090, cid 4, qid 0 00:23:47.481 [2024-06-10 10:49:11.657690] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:47.481 [2024-06-10 10:49:11.657697] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:47.481 [2024-06-10 10:49:11.657700] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:47.481 [2024-06-10 10:49:11.657704] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe10ec0): datao=0, datal=3072, cccid=4 00:23:47.481 [2024-06-10 10:49:11.657708] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe96090) on tqpair(0xe10ec0): expected_datao=0, payload_size=3072 00:23:47.481 [2024-06-10 10:49:11.657712] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.481 [2024-06-10 10:49:11.657791] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
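The discovery controller report printed a few lines below is the output of the spdk_nvme_identify invocation started at host/identify.sh@39 above; reproduced on its own (full build/bin path trimmed here), the command is:

  spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all

The -L all flag enables the DEBUG log components, which is why the nvme_tcp/nvme_ctrlr state-machine lines are interleaved with the report in this log.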
00:23:47.481 [2024-06-10 10:49:11.657795] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:47.481 [2024-06-10 10:49:11.698443] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.481 [2024-06-10 10:49:11.698454] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.481 [2024-06-10 10:49:11.698458] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.481 [2024-06-10 10:49:11.698462] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe96090) on tqpair=0xe10ec0 00:23:47.481 [2024-06-10 10:49:11.698472] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.481 [2024-06-10 10:49:11.698476] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe10ec0) 00:23:47.481 [2024-06-10 10:49:11.698483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.481 [2024-06-10 10:49:11.698498] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe96090, cid 4, qid 0 00:23:47.481 [2024-06-10 10:49:11.698692] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:47.481 [2024-06-10 10:49:11.698698] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:47.481 [2024-06-10 10:49:11.698701] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:47.481 [2024-06-10 10:49:11.698705] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe10ec0): datao=0, datal=8, cccid=4 00:23:47.481 [2024-06-10 10:49:11.698709] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe96090) on tqpair(0xe10ec0): expected_datao=0, payload_size=8 00:23:47.481 [2024-06-10 10:49:11.698713] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.481 [2024-06-10 10:49:11.698720] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:47.481 [2024-06-10 10:49:11.698723] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:47.481 [2024-06-10 10:49:11.739442] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.481 [2024-06-10 10:49:11.739451] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.481 [2024-06-10 10:49:11.739455] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.481 [2024-06-10 10:49:11.739459] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe96090) on tqpair=0xe10ec0 00:23:47.481 ===================================================== 00:23:47.481 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:47.481 ===================================================== 00:23:47.481 Controller Capabilities/Features 00:23:47.481 ================================ 00:23:47.481 Vendor ID: 0000 00:23:47.481 Subsystem Vendor ID: 0000 00:23:47.481 Serial Number: .................... 00:23:47.481 Model Number: ........................................ 
00:23:47.481 Firmware Version: 24.09 00:23:47.481 Recommended Arb Burst: 0 00:23:47.481 IEEE OUI Identifier: 00 00 00 00:23:47.481 Multi-path I/O 00:23:47.481 May have multiple subsystem ports: No 00:23:47.481 May have multiple controllers: No 00:23:47.481 Associated with SR-IOV VF: No 00:23:47.481 Max Data Transfer Size: 131072 00:23:47.481 Max Number of Namespaces: 0 00:23:47.481 Max Number of I/O Queues: 1024 00:23:47.481 NVMe Specification Version (VS): 1.3 00:23:47.481 NVMe Specification Version (Identify): 1.3 00:23:47.481 Maximum Queue Entries: 128 00:23:47.481 Contiguous Queues Required: Yes 00:23:47.481 Arbitration Mechanisms Supported 00:23:47.482 Weighted Round Robin: Not Supported 00:23:47.482 Vendor Specific: Not Supported 00:23:47.482 Reset Timeout: 15000 ms 00:23:47.482 Doorbell Stride: 4 bytes 00:23:47.482 NVM Subsystem Reset: Not Supported 00:23:47.482 Command Sets Supported 00:23:47.482 NVM Command Set: Supported 00:23:47.482 Boot Partition: Not Supported 00:23:47.482 Memory Page Size Minimum: 4096 bytes 00:23:47.482 Memory Page Size Maximum: 4096 bytes 00:23:47.482 Persistent Memory Region: Not Supported 00:23:47.482 Optional Asynchronous Events Supported 00:23:47.482 Namespace Attribute Notices: Not Supported 00:23:47.482 Firmware Activation Notices: Not Supported 00:23:47.482 ANA Change Notices: Not Supported 00:23:47.482 PLE Aggregate Log Change Notices: Not Supported 00:23:47.482 LBA Status Info Alert Notices: Not Supported 00:23:47.482 EGE Aggregate Log Change Notices: Not Supported 00:23:47.482 Normal NVM Subsystem Shutdown event: Not Supported 00:23:47.482 Zone Descriptor Change Notices: Not Supported 00:23:47.482 Discovery Log Change Notices: Supported 00:23:47.482 Controller Attributes 00:23:47.482 128-bit Host Identifier: Not Supported 00:23:47.482 Non-Operational Permissive Mode: Not Supported 00:23:47.482 NVM Sets: Not Supported 00:23:47.482 Read Recovery Levels: Not Supported 00:23:47.482 Endurance Groups: Not Supported 00:23:47.482 Predictable Latency Mode: Not Supported 00:23:47.482 Traffic Based Keep ALive: Not Supported 00:23:47.482 Namespace Granularity: Not Supported 00:23:47.482 SQ Associations: Not Supported 00:23:47.482 UUID List: Not Supported 00:23:47.482 Multi-Domain Subsystem: Not Supported 00:23:47.482 Fixed Capacity Management: Not Supported 00:23:47.482 Variable Capacity Management: Not Supported 00:23:47.482 Delete Endurance Group: Not Supported 00:23:47.482 Delete NVM Set: Not Supported 00:23:47.482 Extended LBA Formats Supported: Not Supported 00:23:47.482 Flexible Data Placement Supported: Not Supported 00:23:47.482 00:23:47.482 Controller Memory Buffer Support 00:23:47.482 ================================ 00:23:47.482 Supported: No 00:23:47.482 00:23:47.482 Persistent Memory Region Support 00:23:47.482 ================================ 00:23:47.482 Supported: No 00:23:47.482 00:23:47.482 Admin Command Set Attributes 00:23:47.482 ============================ 00:23:47.482 Security Send/Receive: Not Supported 00:23:47.482 Format NVM: Not Supported 00:23:47.482 Firmware Activate/Download: Not Supported 00:23:47.482 Namespace Management: Not Supported 00:23:47.482 Device Self-Test: Not Supported 00:23:47.482 Directives: Not Supported 00:23:47.482 NVMe-MI: Not Supported 00:23:47.482 Virtualization Management: Not Supported 00:23:47.482 Doorbell Buffer Config: Not Supported 00:23:47.482 Get LBA Status Capability: Not Supported 00:23:47.482 Command & Feature Lockdown Capability: Not Supported 00:23:47.482 Abort Command Limit: 1 00:23:47.482 Async 
Event Request Limit: 4 00:23:47.482 Number of Firmware Slots: N/A 00:23:47.482 Firmware Slot 1 Read-Only: N/A 00:23:47.482 Firmware Activation Without Reset: N/A 00:23:47.482 Multiple Update Detection Support: N/A 00:23:47.482 Firmware Update Granularity: No Information Provided 00:23:47.482 Per-Namespace SMART Log: No 00:23:47.482 Asymmetric Namespace Access Log Page: Not Supported 00:23:47.482 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:47.482 Command Effects Log Page: Not Supported 00:23:47.482 Get Log Page Extended Data: Supported 00:23:47.482 Telemetry Log Pages: Not Supported 00:23:47.482 Persistent Event Log Pages: Not Supported 00:23:47.482 Supported Log Pages Log Page: May Support 00:23:47.482 Commands Supported & Effects Log Page: Not Supported 00:23:47.482 Feature Identifiers & Effects Log Page:May Support 00:23:47.482 NVMe-MI Commands & Effects Log Page: May Support 00:23:47.482 Data Area 4 for Telemetry Log: Not Supported 00:23:47.482 Error Log Page Entries Supported: 128 00:23:47.482 Keep Alive: Not Supported 00:23:47.482 00:23:47.482 NVM Command Set Attributes 00:23:47.482 ========================== 00:23:47.482 Submission Queue Entry Size 00:23:47.482 Max: 1 00:23:47.482 Min: 1 00:23:47.482 Completion Queue Entry Size 00:23:47.482 Max: 1 00:23:47.482 Min: 1 00:23:47.482 Number of Namespaces: 0 00:23:47.482 Compare Command: Not Supported 00:23:47.482 Write Uncorrectable Command: Not Supported 00:23:47.482 Dataset Management Command: Not Supported 00:23:47.482 Write Zeroes Command: Not Supported 00:23:47.482 Set Features Save Field: Not Supported 00:23:47.482 Reservations: Not Supported 00:23:47.482 Timestamp: Not Supported 00:23:47.482 Copy: Not Supported 00:23:47.482 Volatile Write Cache: Not Present 00:23:47.482 Atomic Write Unit (Normal): 1 00:23:47.482 Atomic Write Unit (PFail): 1 00:23:47.482 Atomic Compare & Write Unit: 1 00:23:47.482 Fused Compare & Write: Supported 00:23:47.482 Scatter-Gather List 00:23:47.482 SGL Command Set: Supported 00:23:47.482 SGL Keyed: Supported 00:23:47.482 SGL Bit Bucket Descriptor: Not Supported 00:23:47.482 SGL Metadata Pointer: Not Supported 00:23:47.482 Oversized SGL: Not Supported 00:23:47.482 SGL Metadata Address: Not Supported 00:23:47.482 SGL Offset: Supported 00:23:47.482 Transport SGL Data Block: Not Supported 00:23:47.482 Replay Protected Memory Block: Not Supported 00:23:47.482 00:23:47.482 Firmware Slot Information 00:23:47.482 ========================= 00:23:47.482 Active slot: 0 00:23:47.482 00:23:47.482 00:23:47.482 Error Log 00:23:47.482 ========= 00:23:47.482 00:23:47.482 Active Namespaces 00:23:47.482 ================= 00:23:47.482 Discovery Log Page 00:23:47.482 ================== 00:23:47.482 Generation Counter: 2 00:23:47.482 Number of Records: 2 00:23:47.482 Record Format: 0 00:23:47.482 00:23:47.482 Discovery Log Entry 0 00:23:47.482 ---------------------- 00:23:47.482 Transport Type: 3 (TCP) 00:23:47.482 Address Family: 1 (IPv4) 00:23:47.482 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:47.482 Entry Flags: 00:23:47.482 Duplicate Returned Information: 1 00:23:47.482 Explicit Persistent Connection Support for Discovery: 1 00:23:47.482 Transport Requirements: 00:23:47.482 Secure Channel: Not Required 00:23:47.482 Port ID: 0 (0x0000) 00:23:47.482 Controller ID: 65535 (0xffff) 00:23:47.482 Admin Max SQ Size: 128 00:23:47.482 Transport Service Identifier: 4420 00:23:47.482 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:47.482 Transport Address: 10.0.0.2 00:23:47.482 
Discovery Log Entry 1 00:23:47.482 ---------------------- 00:23:47.482 Transport Type: 3 (TCP) 00:23:47.482 Address Family: 1 (IPv4) 00:23:47.482 Subsystem Type: 2 (NVM Subsystem) 00:23:47.482 Entry Flags: 00:23:47.482 Duplicate Returned Information: 0 00:23:47.482 Explicit Persistent Connection Support for Discovery: 0 00:23:47.482 Transport Requirements: 00:23:47.482 Secure Channel: Not Required 00:23:47.482 Port ID: 0 (0x0000) 00:23:47.482 Controller ID: 65535 (0xffff) 00:23:47.483 Admin Max SQ Size: 128 00:23:47.483 Transport Service Identifier: 4420 00:23:47.483 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:47.483 Transport Address: 10.0.0.2 [2024-06-10 10:49:11.739544] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:23:47.483 [2024-06-10 10:49:11.739558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.483 [2024-06-10 10:49:11.739565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.483 [2024-06-10 10:49:11.739571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.483 [2024-06-10 10:49:11.739578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.483 [2024-06-10 10:49:11.739587] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.483 [2024-06-10 10:49:11.739591] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.483 [2024-06-10 10:49:11.739595] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe10ec0) 00:23:47.483 [2024-06-10 10:49:11.739602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.483 [2024-06-10 10:49:11.739616] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe95f30, cid 3, qid 0 00:23:47.483 [2024-06-10 10:49:11.739707] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.483 [2024-06-10 10:49:11.739713] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.483 [2024-06-10 10:49:11.739717] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.483 [2024-06-10 10:49:11.739721] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe95f30) on tqpair=0xe10ec0 00:23:47.483 [2024-06-10 10:49:11.739731] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.483 [2024-06-10 10:49:11.739735] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.483 [2024-06-10 10:49:11.739739] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe10ec0) 00:23:47.483 [2024-06-10 10:49:11.739745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.483 [2024-06-10 10:49:11.739758] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe95f30, cid 3, qid 0 00:23:47.483 [2024-06-10 10:49:11.739935] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.483 [2024-06-10 10:49:11.739941] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.483 [2024-06-10 10:49:11.739945] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.483 [2024-06-10 10:49:11.739948] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe95f30) on tqpair=0xe10ec0 00:23:47.483 [2024-06-10 10:49:11.739953] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:47.483 [2024-06-10 10:49:11.739958] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:47.483 [2024-06-10 10:49:11.739967] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.483 [2024-06-10 10:49:11.739971] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.483 [2024-06-10 10:49:11.739974] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe10ec0) 00:23:47.483 [2024-06-10 10:49:11.739981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.483 [2024-06-10 10:49:11.739990] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe95f30, cid 3, qid 0 00:23:47.483 [2024-06-10 10:49:11.740165] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.483 [2024-06-10 10:49:11.740171] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.483 [2024-06-10 10:49:11.740174] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.483 [2024-06-10 10:49:11.740178] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe95f30) on tqpair=0xe10ec0 00:23:47.483 [2024-06-10 10:49:11.740188] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.483 [2024-06-10 10:49:11.740192] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.483 [2024-06-10 10:49:11.740197] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe10ec0) 00:23:47.483 [2024-06-10 10:49:11.740204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.483 [2024-06-10 10:49:11.740213] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe95f30, cid 3, qid 0 00:23:47.483 [2024-06-10 10:49:11.740426] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.483 [2024-06-10 10:49:11.740433] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.483 [2024-06-10 10:49:11.740437] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.483 [2024-06-10 10:49:11.740441] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe95f30) on tqpair=0xe10ec0 00:23:47.483 [2024-06-10 10:49:11.740450] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.483 [2024-06-10 10:49:11.740454] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.483 [2024-06-10 10:49:11.740457] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe10ec0) 00:23:47.483 [2024-06-10 10:49:11.740464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.483 [2024-06-10 10:49:11.740474] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe95f30, cid 3, qid 0 00:23:47.483 [2024-06-10 10:49:11.740742] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.483 [2024-06-10 
10:49:11.740748] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.483 [2024-06-10 10:49:11.740751] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.483 [2024-06-10 10:49:11.740755] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe95f30) on tqpair=0xe10ec0 00:23:47.483 [2024-06-10 10:49:11.740765] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.483 [2024-06-10 10:49:11.740769] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.483 [2024-06-10 10:49:11.740772] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe10ec0) 00:23:47.483 [2024-06-10 10:49:11.740778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.483 [2024-06-10 10:49:11.740788] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe95f30, cid 3, qid 0 00:23:47.483 [2024-06-10 10:49:11.740986] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.483 [2024-06-10 10:49:11.740992] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.483 [2024-06-10 10:49:11.740996] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.483 [2024-06-10 10:49:11.740999] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe95f30) on tqpair=0xe10ec0 00:23:47.483 [2024-06-10 10:49:11.741008] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.483 [2024-06-10 10:49:11.741012] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.483 [2024-06-10 10:49:11.741016] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe10ec0) 00:23:47.483 [2024-06-10 10:49:11.741022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.483 [2024-06-10 10:49:11.741032] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe95f30, cid 3, qid 0 00:23:47.483 [2024-06-10 10:49:11.741229] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.483 [2024-06-10 10:49:11.741236] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.483 [2024-06-10 10:49:11.741239] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.483 [2024-06-10 10:49:11.741246] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe95f30) on tqpair=0xe10ec0 00:23:47.483 [2024-06-10 10:49:11.741256] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.483 [2024-06-10 10:49:11.741260] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.483 [2024-06-10 10:49:11.741263] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe10ec0) 00:23:47.483 [2024-06-10 10:49:11.741272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.483 [2024-06-10 10:49:11.741281] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe95f30, cid 3, qid 0 00:23:47.483 [2024-06-10 10:49:11.741500] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.483 [2024-06-10 10:49:11.741506] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.483 [2024-06-10 10:49:11.741510] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.483 
[2024-06-10 10:49:11.741513] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe95f30) on tqpair=0xe10ec0 00:23:47.483 [2024-06-10 10:49:11.741523] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.483 [2024-06-10 10:49:11.741527] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.483 [2024-06-10 10:49:11.741530] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe10ec0) 00:23:47.483 [2024-06-10 10:49:11.741537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.483 [2024-06-10 10:49:11.741546] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe95f30, cid 3, qid 0 00:23:47.483 [2024-06-10 10:49:11.741720] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.483 [2024-06-10 10:49:11.741726] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.483 [2024-06-10 10:49:11.741729] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.483 [2024-06-10 10:49:11.741733] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe95f30) on tqpair=0xe10ec0 00:23:47.483 [2024-06-10 10:49:11.741742] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.483 [2024-06-10 10:49:11.741746] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.483 [2024-06-10 10:49:11.741749] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe10ec0) 00:23:47.483 [2024-06-10 10:49:11.741756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.483 [2024-06-10 10:49:11.741765] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe95f30, cid 3, qid 0 00:23:47.483 [2024-06-10 10:49:11.741939] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.483 [2024-06-10 10:49:11.741946] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.483 [2024-06-10 10:49:11.741949] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.483 [2024-06-10 10:49:11.741953] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe95f30) on tqpair=0xe10ec0 00:23:47.483 [2024-06-10 10:49:11.741962] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.483 [2024-06-10 10:49:11.741966] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.483 [2024-06-10 10:49:11.741969] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe10ec0) 00:23:47.484 [2024-06-10 10:49:11.741976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.484 [2024-06-10 10:49:11.741985] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe95f30, cid 3, qid 0 00:23:47.484 [2024-06-10 10:49:11.742177] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.484 [2024-06-10 10:49:11.742183] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.484 [2024-06-10 10:49:11.742186] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.484 [2024-06-10 10:49:11.742190] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe95f30) on tqpair=0xe10ec0 00:23:47.484 [2024-06-10 10:49:11.742199] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.484 [2024-06-10 10:49:11.742203] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.484 [2024-06-10 10:49:11.742206] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe10ec0) 00:23:47.484 [2024-06-10 10:49:11.742213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.484 [2024-06-10 10:49:11.742226] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe95f30, cid 3, qid 0 00:23:47.484 [2024-06-10 10:49:11.746251] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.484 [2024-06-10 10:49:11.746259] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.484 [2024-06-10 10:49:11.746263] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.484 [2024-06-10 10:49:11.746266] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe95f30) on tqpair=0xe10ec0 00:23:47.484 [2024-06-10 10:49:11.746274] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:23:47.484 00:23:47.484 10:49:11 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:47.750 [2024-06-10 10:49:11.783395] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:23:47.750 [2024-06-10 10:49:11.783436] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid930024 ] 00:23:47.750 EAL: No free 2048 kB hugepages reported on node 1 00:23:47.750 [2024-06-10 10:49:11.815777] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:47.750 [2024-06-10 10:49:11.815825] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:47.750 [2024-06-10 10:49:11.815830] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:47.750 [2024-06-10 10:49:11.815844] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:47.750 [2024-06-10 10:49:11.815852] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:47.750 [2024-06-10 10:49:11.819276] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:47.750 [2024-06-10 10:49:11.819303] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1cdcec0 0 00:23:47.750 [2024-06-10 10:49:11.827250] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:47.750 [2024-06-10 10:49:11.827260] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:47.750 [2024-06-10 10:49:11.827264] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:47.750 [2024-06-10 10:49:11.827267] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:47.750 [2024-06-10 10:49:11.827300] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.750 [2024-06-10 10:49:11.827305] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.750 [2024-06-10 10:49:11.827309] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cdcec0) 00:23:47.750 [2024-06-10 10:49:11.827320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:47.750 [2024-06-10 10:49:11.827336] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d61b10, cid 0, qid 0 00:23:47.750 [2024-06-10 10:49:11.835252] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.750 [2024-06-10 10:49:11.835260] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.750 [2024-06-10 10:49:11.835264] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.750 [2024-06-10 10:49:11.835268] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d61b10) on tqpair=0x1cdcec0 00:23:47.750 [2024-06-10 10:49:11.835278] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:47.750 [2024-06-10 10:49:11.835284] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:47.750 [2024-06-10 10:49:11.835293] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:47.750 [2024-06-10 10:49:11.835304] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.750 [2024-06-10 10:49:11.835308] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.750 [2024-06-10 10:49:11.835311] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cdcec0) 00:23:47.750 [2024-06-10 10:49:11.835319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.750 [2024-06-10 10:49:11.835331] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d61b10, cid 0, qid 0 00:23:47.750 [2024-06-10 10:49:11.835510] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.750 [2024-06-10 10:49:11.835517] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.750 [2024-06-10 10:49:11.835520] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.750 [2024-06-10 10:49:11.835524] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d61b10) on tqpair=0x1cdcec0 00:23:47.750 [2024-06-10 10:49:11.835530] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:47.750 [2024-06-10 10:49:11.835537] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:47.750 [2024-06-10 10:49:11.835544] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.750 [2024-06-10 10:49:11.835547] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.750 [2024-06-10 10:49:11.835551] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cdcec0) 00:23:47.750 [2024-06-10 10:49:11.835557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.750 [2024-06-10 10:49:11.835567] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d61b10, cid 0, qid 0 00:23:47.750 [2024-06-10 10:49:11.835754] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.750 [2024-06-10 10:49:11.835761] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.750 [2024-06-10 10:49:11.835764] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.750 [2024-06-10 10:49:11.835768] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d61b10) on tqpair=0x1cdcec0 00:23:47.750 [2024-06-10 10:49:11.835773] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:47.750 [2024-06-10 10:49:11.835781] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:47.750 [2024-06-10 10:49:11.835787] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.750 [2024-06-10 10:49:11.835791] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.750 [2024-06-10 10:49:11.835794] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cdcec0) 00:23:47.750 [2024-06-10 10:49:11.835801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.750 [2024-06-10 10:49:11.835811] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d61b10, cid 0, qid 0 00:23:47.750 [2024-06-10 10:49:11.835970] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.750 [2024-06-10 10:49:11.835977] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.750 [2024-06-10 10:49:11.835980] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.751 [2024-06-10 10:49:11.835984] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d61b10) on tqpair=0x1cdcec0 00:23:47.751 [2024-06-10 10:49:11.835989] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:47.751 [2024-06-10 10:49:11.836001] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.751 [2024-06-10 10:49:11.836004] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.751 [2024-06-10 10:49:11.836008] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cdcec0) 00:23:47.751 [2024-06-10 10:49:11.836015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.751 [2024-06-10 10:49:11.836024] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d61b10, cid 0, qid 0 00:23:47.751 [2024-06-10 10:49:11.836192] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.751 [2024-06-10 10:49:11.836199] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.751 [2024-06-10 10:49:11.836202] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.751 [2024-06-10 10:49:11.836205] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d61b10) on tqpair=0x1cdcec0 00:23:47.751 [2024-06-10 10:49:11.836211] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:47.751 [2024-06-10 10:49:11.836215] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:47.751 
[2024-06-10 10:49:11.836222] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:47.751 [2024-06-10 10:49:11.836328] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:47.751 [2024-06-10 10:49:11.836332] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:47.751 [2024-06-10 10:49:11.836339] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.751 [2024-06-10 10:49:11.836343] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.751 [2024-06-10 10:49:11.836346] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cdcec0) 00:23:47.751 [2024-06-10 10:49:11.836353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.751 [2024-06-10 10:49:11.836363] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d61b10, cid 0, qid 0 00:23:47.751 [2024-06-10 10:49:11.836564] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.751 [2024-06-10 10:49:11.836571] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.751 [2024-06-10 10:49:11.836574] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.751 [2024-06-10 10:49:11.836577] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d61b10) on tqpair=0x1cdcec0 00:23:47.751 [2024-06-10 10:49:11.836583] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:47.751 [2024-06-10 10:49:11.836592] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.751 [2024-06-10 10:49:11.836595] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.751 [2024-06-10 10:49:11.836599] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cdcec0) 00:23:47.751 [2024-06-10 10:49:11.836605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.751 [2024-06-10 10:49:11.836615] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d61b10, cid 0, qid 0 00:23:47.751 [2024-06-10 10:49:11.836810] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.751 [2024-06-10 10:49:11.836816] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.751 [2024-06-10 10:49:11.836819] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.751 [2024-06-10 10:49:11.836823] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d61b10) on tqpair=0x1cdcec0 00:23:47.751 [2024-06-10 10:49:11.836828] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:47.751 [2024-06-10 10:49:11.836835] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:47.751 [2024-06-10 10:49:11.836842] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:47.751 [2024-06-10 10:49:11.836850] 
nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:47.751 [2024-06-10 10:49:11.836858] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.751 [2024-06-10 10:49:11.836862] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cdcec0) 00:23:47.751 [2024-06-10 10:49:11.836868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.751 [2024-06-10 10:49:11.836878] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d61b10, cid 0, qid 0 00:23:47.751 [2024-06-10 10:49:11.837073] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:47.751 [2024-06-10 10:49:11.837080] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:47.751 [2024-06-10 10:49:11.837083] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:47.751 [2024-06-10 10:49:11.837087] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cdcec0): datao=0, datal=4096, cccid=0 00:23:47.751 [2024-06-10 10:49:11.837092] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d61b10) on tqpair(0x1cdcec0): expected_datao=0, payload_size=4096 00:23:47.751 [2024-06-10 10:49:11.837096] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.751 [2024-06-10 10:49:11.837103] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:47.751 [2024-06-10 10:49:11.837107] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:47.751 [2024-06-10 10:49:11.837258] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.751 [2024-06-10 10:49:11.837264] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.751 [2024-06-10 10:49:11.837268] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.751 [2024-06-10 10:49:11.837271] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d61b10) on tqpair=0x1cdcec0 00:23:47.751 [2024-06-10 10:49:11.837279] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:47.751 [2024-06-10 10:49:11.837284] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:47.751 [2024-06-10 10:49:11.837288] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:47.751 [2024-06-10 10:49:11.837294] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:47.751 [2024-06-10 10:49:11.837299] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:47.751 [2024-06-10 10:49:11.837304] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:47.752 [2024-06-10 10:49:11.837312] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:47.752 [2024-06-10 10:49:11.837318] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.752 [2024-06-10 10:49:11.837322] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.752 [2024-06-10 10:49:11.837325] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=0 on tqpair(0x1cdcec0) 00:23:47.752 [2024-06-10 10:49:11.837332] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:47.752 [2024-06-10 10:49:11.837343] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d61b10, cid 0, qid 0 00:23:47.752 [2024-06-10 10:49:11.837504] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.752 [2024-06-10 10:49:11.837512] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.752 [2024-06-10 10:49:11.837515] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.752 [2024-06-10 10:49:11.837519] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d61b10) on tqpair=0x1cdcec0 00:23:47.752 [2024-06-10 10:49:11.837526] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.752 [2024-06-10 10:49:11.837530] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.752 [2024-06-10 10:49:11.837534] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cdcec0) 00:23:47.752 [2024-06-10 10:49:11.837540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.752 [2024-06-10 10:49:11.837546] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.752 [2024-06-10 10:49:11.837549] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.752 [2024-06-10 10:49:11.837553] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1cdcec0) 00:23:47.752 [2024-06-10 10:49:11.837558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.752 [2024-06-10 10:49:11.837564] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.752 [2024-06-10 10:49:11.837568] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.752 [2024-06-10 10:49:11.837571] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1cdcec0) 00:23:47.752 [2024-06-10 10:49:11.837577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.752 [2024-06-10 10:49:11.837583] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.752 [2024-06-10 10:49:11.837586] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.752 [2024-06-10 10:49:11.837590] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cdcec0) 00:23:47.752 [2024-06-10 10:49:11.837595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.752 [2024-06-10 10:49:11.837600] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:47.752 [2024-06-10 10:49:11.837610] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:47.752 [2024-06-10 10:49:11.837617] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.752 [2024-06-10 10:49:11.837620] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x1cdcec0) 00:23:47.752 [2024-06-10 10:49:11.837627] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.752 [2024-06-10 10:49:11.837638] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d61b10, cid 0, qid 0 00:23:47.752 [2024-06-10 10:49:11.837643] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d61c70, cid 1, qid 0 00:23:47.752 [2024-06-10 10:49:11.837648] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d61dd0, cid 2, qid 0 00:23:47.752 [2024-06-10 10:49:11.837652] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d61f30, cid 3, qid 0 00:23:47.752 [2024-06-10 10:49:11.837657] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d62090, cid 4, qid 0 00:23:47.752 [2024-06-10 10:49:11.837885] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.752 [2024-06-10 10:49:11.837891] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.752 [2024-06-10 10:49:11.837895] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.752 [2024-06-10 10:49:11.837898] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d62090) on tqpair=0x1cdcec0 00:23:47.752 [2024-06-10 10:49:11.837904] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:47.752 [2024-06-10 10:49:11.837912] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:47.752 [2024-06-10 10:49:11.837920] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:47.752 [2024-06-10 10:49:11.837926] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:47.752 [2024-06-10 10:49:11.837932] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.752 [2024-06-10 10:49:11.837936] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.752 [2024-06-10 10:49:11.837940] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cdcec0) 00:23:47.752 [2024-06-10 10:49:11.837946] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:47.752 [2024-06-10 10:49:11.837956] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d62090, cid 4, qid 0 00:23:47.752 [2024-06-10 10:49:11.838115] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.752 [2024-06-10 10:49:11.838122] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.752 [2024-06-10 10:49:11.838125] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.752 [2024-06-10 10:49:11.838129] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d62090) on tqpair=0x1cdcec0 00:23:47.752 [2024-06-10 10:49:11.838182] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:47.752 [2024-06-10 10:49:11.838191] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 
30000 ms) 00:23:47.752 [2024-06-10 10:49:11.838198] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.752 [2024-06-10 10:49:11.838201] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cdcec0) 00:23:47.752 [2024-06-10 10:49:11.838208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.752 [2024-06-10 10:49:11.838217] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d62090, cid 4, qid 0 00:23:47.752 [2024-06-10 10:49:11.838397] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:47.752 [2024-06-10 10:49:11.838404] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:47.752 [2024-06-10 10:49:11.838407] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:47.752 [2024-06-10 10:49:11.838411] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cdcec0): datao=0, datal=4096, cccid=4 00:23:47.752 [2024-06-10 10:49:11.838415] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d62090) on tqpair(0x1cdcec0): expected_datao=0, payload_size=4096 00:23:47.752 [2024-06-10 10:49:11.838419] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.752 [2024-06-10 10:49:11.838494] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:47.752 [2024-06-10 10:49:11.838498] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:47.752 [2024-06-10 10:49:11.838660] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.752 [2024-06-10 10:49:11.838667] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.752 [2024-06-10 10:49:11.838670] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.752 [2024-06-10 10:49:11.838674] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d62090) on tqpair=0x1cdcec0 00:23:47.752 [2024-06-10 10:49:11.838683] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:47.752 [2024-06-10 10:49:11.838696] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:47.752 [2024-06-10 10:49:11.838705] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:47.752 [2024-06-10 10:49:11.838722] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.752 [2024-06-10 10:49:11.838726] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cdcec0) 00:23:47.752 [2024-06-10 10:49:11.838732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.752 [2024-06-10 10:49:11.838743] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d62090, cid 4, qid 0 00:23:47.752 [2024-06-10 10:49:11.838954] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:47.752 [2024-06-10 10:49:11.838960] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:47.752 [2024-06-10 10:49:11.838964] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:47.752 [2024-06-10 10:49:11.838967] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cdcec0): 
datao=0, datal=4096, cccid=4 00:23:47.752 [2024-06-10 10:49:11.838972] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d62090) on tqpair(0x1cdcec0): expected_datao=0, payload_size=4096 00:23:47.752 [2024-06-10 10:49:11.838976] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.752 [2024-06-10 10:49:11.839008] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:47.752 [2024-06-10 10:49:11.839012] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:47.752 [2024-06-10 10:49:11.839164] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.752 [2024-06-10 10:49:11.839171] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.752 [2024-06-10 10:49:11.839174] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.752 [2024-06-10 10:49:11.839178] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d62090) on tqpair=0x1cdcec0 00:23:47.752 [2024-06-10 10:49:11.839191] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:47.752 [2024-06-10 10:49:11.839201] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:47.752 [2024-06-10 10:49:11.839208] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.753 [2024-06-10 10:49:11.839212] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cdcec0) 00:23:47.753 [2024-06-10 10:49:11.839218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.753 [2024-06-10 10:49:11.839228] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d62090, cid 4, qid 0 00:23:47.753 [2024-06-10 10:49:11.843252] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:47.753 [2024-06-10 10:49:11.843260] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:47.753 [2024-06-10 10:49:11.843264] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:47.753 [2024-06-10 10:49:11.843267] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cdcec0): datao=0, datal=4096, cccid=4 00:23:47.753 [2024-06-10 10:49:11.843271] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d62090) on tqpair(0x1cdcec0): expected_datao=0, payload_size=4096 00:23:47.753 [2024-06-10 10:49:11.843276] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.753 [2024-06-10 10:49:11.843282] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:47.753 [2024-06-10 10:49:11.843286] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:47.753 [2024-06-10 10:49:11.843291] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.753 [2024-06-10 10:49:11.843297] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.753 [2024-06-10 10:49:11.843300] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.753 [2024-06-10 10:49:11.843304] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d62090) on tqpair=0x1cdcec0 00:23:47.753 [2024-06-10 10:49:11.843312] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific 
(timeout 30000 ms) 00:23:47.753 [2024-06-10 10:49:11.843323] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:47.753 [2024-06-10 10:49:11.843331] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:47.753 [2024-06-10 10:49:11.843337] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:47.753 [2024-06-10 10:49:11.843343] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:47.753 [2024-06-10 10:49:11.843348] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:47.753 [2024-06-10 10:49:11.843352] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:47.753 [2024-06-10 10:49:11.843357] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:47.753 [2024-06-10 10:49:11.843372] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.753 [2024-06-10 10:49:11.843377] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cdcec0) 00:23:47.753 [2024-06-10 10:49:11.843383] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.753 [2024-06-10 10:49:11.843390] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.753 [2024-06-10 10:49:11.843393] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.753 [2024-06-10 10:49:11.843397] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1cdcec0) 00:23:47.753 [2024-06-10 10:49:11.843403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.753 [2024-06-10 10:49:11.843416] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d62090, cid 4, qid 0 00:23:47.753 [2024-06-10 10:49:11.843421] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d621f0, cid 5, qid 0 00:23:47.753 [2024-06-10 10:49:11.843622] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.753 [2024-06-10 10:49:11.843629] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.753 [2024-06-10 10:49:11.843632] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.753 [2024-06-10 10:49:11.843636] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d62090) on tqpair=0x1cdcec0 00:23:47.753 [2024-06-10 10:49:11.843643] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.753 [2024-06-10 10:49:11.843649] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.753 [2024-06-10 10:49:11.843652] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.753 [2024-06-10 10:49:11.843656] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d621f0) on tqpair=0x1cdcec0 00:23:47.753 [2024-06-10 10:49:11.843665] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.753 [2024-06-10 10:49:11.843669] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1cdcec0) 00:23:47.753 [2024-06-10 10:49:11.843675] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.753 [2024-06-10 10:49:11.843684] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d621f0, cid 5, qid 0 00:23:47.753 [2024-06-10 10:49:11.843868] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.753 [2024-06-10 10:49:11.843874] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.753 [2024-06-10 10:49:11.843878] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.753 [2024-06-10 10:49:11.843881] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d621f0) on tqpair=0x1cdcec0 00:23:47.753 [2024-06-10 10:49:11.843893] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.753 [2024-06-10 10:49:11.843896] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1cdcec0) 00:23:47.753 [2024-06-10 10:49:11.843903] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.753 [2024-06-10 10:49:11.843912] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d621f0, cid 5, qid 0 00:23:47.753 [2024-06-10 10:49:11.844099] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.753 [2024-06-10 10:49:11.844105] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.753 [2024-06-10 10:49:11.844108] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.753 [2024-06-10 10:49:11.844112] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d621f0) on tqpair=0x1cdcec0 00:23:47.753 [2024-06-10 10:49:11.844122] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.753 [2024-06-10 10:49:11.844125] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1cdcec0) 00:23:47.753 [2024-06-10 10:49:11.844131] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.753 [2024-06-10 10:49:11.844140] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d621f0, cid 5, qid 0 00:23:47.753 [2024-06-10 10:49:11.844370] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.753 [2024-06-10 10:49:11.844376] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.753 [2024-06-10 10:49:11.844380] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.753 [2024-06-10 10:49:11.844383] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d621f0) on tqpair=0x1cdcec0 00:23:47.753 [2024-06-10 10:49:11.844395] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.753 [2024-06-10 10:49:11.844399] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1cdcec0) 00:23:47.753 [2024-06-10 10:49:11.844405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.753 [2024-06-10 10:49:11.844412] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.753 [2024-06-10 
10:49:11.844416] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cdcec0) 00:23:47.753 [2024-06-10 10:49:11.844422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.753 [2024-06-10 10:49:11.844429] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.753 [2024-06-10 10:49:11.844432] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1cdcec0) 00:23:47.753 [2024-06-10 10:49:11.844439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.753 [2024-06-10 10:49:11.844446] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.753 [2024-06-10 10:49:11.844450] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1cdcec0) 00:23:47.753 [2024-06-10 10:49:11.844456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.753 [2024-06-10 10:49:11.844466] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d621f0, cid 5, qid 0 00:23:47.754 [2024-06-10 10:49:11.844472] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d62090, cid 4, qid 0 00:23:47.754 [2024-06-10 10:49:11.844476] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d62350, cid 6, qid 0 00:23:47.754 [2024-06-10 10:49:11.844481] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d624b0, cid 7, qid 0 00:23:47.754 [2024-06-10 10:49:11.844819] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:47.754 [2024-06-10 10:49:11.844827] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:47.754 [2024-06-10 10:49:11.844831] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:47.754 [2024-06-10 10:49:11.844834] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cdcec0): datao=0, datal=8192, cccid=5 00:23:47.754 [2024-06-10 10:49:11.844838] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d621f0) on tqpair(0x1cdcec0): expected_datao=0, payload_size=8192 00:23:47.754 [2024-06-10 10:49:11.844842] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.754 [2024-06-10 10:49:11.844942] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:47.754 [2024-06-10 10:49:11.844946] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:47.754 [2024-06-10 10:49:11.844952] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:47.754 [2024-06-10 10:49:11.844957] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:47.754 [2024-06-10 10:49:11.844961] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:47.754 [2024-06-10 10:49:11.844964] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cdcec0): datao=0, datal=512, cccid=4 00:23:47.754 [2024-06-10 10:49:11.844969] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d62090) on tqpair(0x1cdcec0): expected_datao=0, payload_size=512 00:23:47.754 [2024-06-10 10:49:11.844973] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.754 [2024-06-10 10:49:11.844979] 
nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:47.754 [2024-06-10 10:49:11.844982] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:47.754 [2024-06-10 10:49:11.844988] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:47.754 [2024-06-10 10:49:11.844993] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:47.754 [2024-06-10 10:49:11.844997] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:47.754 [2024-06-10 10:49:11.845000] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cdcec0): datao=0, datal=512, cccid=6 00:23:47.754 [2024-06-10 10:49:11.845004] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d62350) on tqpair(0x1cdcec0): expected_datao=0, payload_size=512 00:23:47.754 [2024-06-10 10:49:11.845008] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.754 [2024-06-10 10:49:11.845015] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:47.754 [2024-06-10 10:49:11.845018] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:47.754 [2024-06-10 10:49:11.845023] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:47.754 [2024-06-10 10:49:11.845029] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:47.754 [2024-06-10 10:49:11.845032] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:47.754 [2024-06-10 10:49:11.845036] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cdcec0): datao=0, datal=4096, cccid=7 00:23:47.754 [2024-06-10 10:49:11.845040] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d624b0) on tqpair(0x1cdcec0): expected_datao=0, payload_size=4096 00:23:47.754 [2024-06-10 10:49:11.845044] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.754 [2024-06-10 10:49:11.845054] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:47.754 [2024-06-10 10:49:11.845058] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:47.754 [2024-06-10 10:49:11.845319] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.754 [2024-06-10 10:49:11.845325] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.754 [2024-06-10 10:49:11.845328] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.754 [2024-06-10 10:49:11.845332] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d621f0) on tqpair=0x1cdcec0 00:23:47.754 [2024-06-10 10:49:11.845345] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.754 [2024-06-10 10:49:11.845351] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.754 [2024-06-10 10:49:11.845354] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.754 [2024-06-10 10:49:11.845358] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d62090) on tqpair=0x1cdcec0 00:23:47.754 [2024-06-10 10:49:11.845368] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.754 [2024-06-10 10:49:11.845374] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.754 [2024-06-10 10:49:11.845377] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.754 [2024-06-10 10:49:11.845381] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d62350) on tqpair=0x1cdcec0 00:23:47.754 [2024-06-10 10:49:11.845392] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:47.754 [2024-06-10 10:49:11.845398] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:47.754 [2024-06-10 10:49:11.845401] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:47.754 [2024-06-10 10:49:11.845405] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d624b0) on tqpair=0x1cdcec0
00:23:47.754 =====================================================
00:23:47.754 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:47.754 =====================================================
00:23:47.754 Controller Capabilities/Features
00:23:47.754 ================================
00:23:47.754 Vendor ID: 8086
00:23:47.754 Subsystem Vendor ID: 8086
00:23:47.754 Serial Number: SPDK00000000000001
00:23:47.754 Model Number: SPDK bdev Controller
00:23:47.754 Firmware Version: 24.09
00:23:47.754 Recommended Arb Burst: 6
00:23:47.754 IEEE OUI Identifier: e4 d2 5c
00:23:47.754 Multi-path I/O
00:23:47.754 May have multiple subsystem ports: Yes
00:23:47.754 May have multiple controllers: Yes
00:23:47.754 Associated with SR-IOV VF: No
00:23:47.754 Max Data Transfer Size: 131072
00:23:47.754 Max Number of Namespaces: 32
00:23:47.754 Max Number of I/O Queues: 127
00:23:47.754 NVMe Specification Version (VS): 1.3
00:23:47.754 NVMe Specification Version (Identify): 1.3
00:23:47.754 Maximum Queue Entries: 128
00:23:47.754 Contiguous Queues Required: Yes
00:23:47.754 Arbitration Mechanisms Supported
00:23:47.754 Weighted Round Robin: Not Supported
00:23:47.754 Vendor Specific: Not Supported
00:23:47.754 Reset Timeout: 15000 ms
00:23:47.754 Doorbell Stride: 4 bytes
00:23:47.754 NVM Subsystem Reset: Not Supported
00:23:47.754 Command Sets Supported
00:23:47.754 NVM Command Set: Supported
00:23:47.754 Boot Partition: Not Supported
00:23:47.754 Memory Page Size Minimum: 4096 bytes
00:23:47.754 Memory Page Size Maximum: 4096 bytes
00:23:47.754 Persistent Memory Region: Not Supported
00:23:47.754 Optional Asynchronous Events Supported
00:23:47.754 Namespace Attribute Notices: Supported
00:23:47.754 Firmware Activation Notices: Not Supported
00:23:47.754 ANA Change Notices: Not Supported
00:23:47.754 PLE Aggregate Log Change Notices: Not Supported
00:23:47.754 LBA Status Info Alert Notices: Not Supported
00:23:47.754 EGE Aggregate Log Change Notices: Not Supported
00:23:47.754 Normal NVM Subsystem Shutdown event: Not Supported
00:23:47.754 Zone Descriptor Change Notices: Not Supported
00:23:47.754 Discovery Log Change Notices: Not Supported
00:23:47.754 Controller Attributes
00:23:47.754 128-bit Host Identifier: Supported
00:23:47.754 Non-Operational Permissive Mode: Not Supported
00:23:47.754 NVM Sets: Not Supported
00:23:47.754 Read Recovery Levels: Not Supported
00:23:47.754 Endurance Groups: Not Supported
00:23:47.754 Predictable Latency Mode: Not Supported
00:23:47.754 Traffic Based Keep ALive: Not Supported
00:23:47.754 Namespace Granularity: Not Supported
00:23:47.754 SQ Associations: Not Supported
00:23:47.754 UUID List: Not Supported
00:23:47.754 Multi-Domain Subsystem: Not Supported
00:23:47.754 Fixed Capacity Management: Not Supported
00:23:47.754 Variable Capacity Management: Not Supported
00:23:47.755 Delete Endurance Group: Not Supported
00:23:47.755 Delete NVM Set: Not Supported
00:23:47.755 Extended LBA Formats Supported: Not Supported
00:23:47.755 Flexible Data Placement Supported: Not Supported
00:23:47.755
00:23:47.755 Controller Memory Buffer Support
00:23:47.755 ================================
00:23:47.755 Supported: No
00:23:47.755
00:23:47.755 Persistent Memory Region Support
00:23:47.755 ================================
00:23:47.755 Supported: No
00:23:47.755
00:23:47.755 Admin Command Set Attributes
00:23:47.755 ============================
00:23:47.755 Security Send/Receive: Not Supported
00:23:47.755 Format NVM: Not Supported
00:23:47.755 Firmware Activate/Download: Not Supported
00:23:47.755 Namespace Management: Not Supported
00:23:47.755 Device Self-Test: Not Supported
00:23:47.755 Directives: Not Supported
00:23:47.755 NVMe-MI: Not Supported
00:23:47.755 Virtualization Management: Not Supported
00:23:47.755 Doorbell Buffer Config: Not Supported
00:23:47.755 Get LBA Status Capability: Not Supported
00:23:47.755 Command & Feature Lockdown Capability: Not Supported
00:23:47.755 Abort Command Limit: 4
00:23:47.755 Async Event Request Limit: 4
00:23:47.755 Number of Firmware Slots: N/A
00:23:47.755 Firmware Slot 1 Read-Only: N/A
00:23:47.755 Firmware Activation Without Reset: N/A
00:23:47.755 Multiple Update Detection Support: N/A
00:23:47.755 Firmware Update Granularity: No Information Provided
00:23:47.755 Per-Namespace SMART Log: No
00:23:47.755 Asymmetric Namespace Access Log Page: Not Supported
00:23:47.755 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:23:47.755 Command Effects Log Page: Supported
00:23:47.755 Get Log Page Extended Data: Supported
00:23:47.755 Telemetry Log Pages: Not Supported
00:23:47.755 Persistent Event Log Pages: Not Supported
00:23:47.755 Supported Log Pages Log Page: May Support
00:23:47.755 Commands Supported & Effects Log Page: Not Supported
00:23:47.755 Feature Identifiers & Effects Log Page:May Support
00:23:47.755 NVMe-MI Commands & Effects Log Page: May Support
00:23:47.755 Data Area 4 for Telemetry Log: Not Supported
00:23:47.755 Error Log Page Entries Supported: 128
00:23:47.755 Keep Alive: Supported
00:23:47.755 Keep Alive Granularity: 10000 ms
00:23:47.755
00:23:47.755 NVM Command Set Attributes
00:23:47.755 ==========================
00:23:47.755 Submission Queue Entry Size
00:23:47.755 Max: 64
00:23:47.755 Min: 64
00:23:47.755 Completion Queue Entry Size
00:23:47.755 Max: 16
00:23:47.755 Min: 16
00:23:47.755 Number of Namespaces: 32
00:23:47.755 Compare Command: Supported
00:23:47.755 Write Uncorrectable Command: Not Supported
00:23:47.755 Dataset Management Command: Supported
00:23:47.755 Write Zeroes Command: Supported
00:23:47.755 Set Features Save Field: Not Supported
00:23:47.755 Reservations: Supported
00:23:47.755 Timestamp: Not Supported
00:23:47.755 Copy: Supported
00:23:47.755 Volatile Write Cache: Present
00:23:47.755 Atomic Write Unit (Normal): 1
00:23:47.755 Atomic Write Unit (PFail): 1
00:23:47.755 Atomic Compare & Write Unit: 1
00:23:47.755 Fused Compare & Write: Supported
00:23:47.755 Scatter-Gather List
00:23:47.755 SGL Command Set: Supported
00:23:47.755 SGL Keyed: Supported
00:23:47.755 SGL Bit Bucket Descriptor: Not Supported
00:23:47.755 SGL Metadata Pointer: Not Supported
00:23:47.755 Oversized SGL: Not Supported
00:23:47.755 SGL Metadata Address: Not Supported
00:23:47.755 SGL Offset: Supported
00:23:47.755 Transport SGL Data Block: Not Supported
00:23:47.755 Replay Protected Memory Block: Not Supported
00:23:47.755
00:23:47.755 Firmware Slot Information
00:23:47.755 =========================
00:23:47.755 Active slot: 1
00:23:47.755 Slot 1 Firmware Revision: 24.09
00:23:47.755
00:23:47.755
00:23:47.755 Commands Supported and Effects
00:23:47.755 ==============================
00:23:47.755 Admin Commands
00:23:47.755 --------------
00:23:47.755 Get Log Page (02h): Supported
00:23:47.755 Identify (06h): Supported
00:23:47.755 Abort (08h): Supported
00:23:47.755 Set Features (09h): Supported
00:23:47.755 Get Features (0Ah): Supported
00:23:47.755 Asynchronous Event Request (0Ch): Supported
00:23:47.755 Keep Alive (18h): Supported
00:23:47.755 I/O Commands
00:23:47.755 ------------
00:23:47.755 Flush (00h): Supported LBA-Change
00:23:47.755 Write (01h): Supported LBA-Change
00:23:47.755 Read (02h): Supported
00:23:47.755 Compare (05h): Supported
00:23:47.755 Write Zeroes (08h): Supported LBA-Change
00:23:47.755 Dataset Management (09h): Supported LBA-Change
00:23:47.755 Copy (19h): Supported LBA-Change
00:23:47.755 Unknown (79h): Supported LBA-Change
00:23:47.755 Unknown (7Ah): Supported
00:23:47.755
00:23:47.755 Error Log
00:23:47.755 =========
00:23:47.755
00:23:47.755 Arbitration
00:23:47.755 ===========
00:23:47.755 Arbitration Burst: 1
00:23:47.755
00:23:47.755 Power Management
00:23:47.755 ================
00:23:47.755 Number of Power States: 1
00:23:47.755 Current Power State: Power State #0
00:23:47.755 Power State #0:
00:23:47.755 Max Power: 0.00 W
00:23:47.755 Non-Operational State: Operational
00:23:47.755 Entry Latency: Not Reported
00:23:47.755 Exit Latency: Not Reported
00:23:47.755 Relative Read Throughput: 0
00:23:47.755 Relative Read Latency: 0
00:23:47.755 Relative Write Throughput: 0
00:23:47.755 Relative Write Latency: 0
00:23:47.755 Idle Power: Not Reported
00:23:47.755 Active Power: Not Reported
00:23:47.755 Non-Operational Permissive Mode: Not Supported
00:23:47.755
00:23:47.755 Health Information
00:23:47.755 ==================
00:23:47.755 Critical Warnings:
00:23:47.755 Available Spare Space: OK
00:23:47.755 Temperature: OK
00:23:47.755 Device Reliability: OK
00:23:47.755 Read Only: No
00:23:47.755 Volatile Memory Backup: OK
00:23:47.755 Current Temperature: 0 Kelvin (-273 Celsius)
00:23:47.755 Temperature Threshold: [2024-06-10 10:49:11.845503] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:47.755 [2024-06-10 10:49:11.845508] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1cdcec0)
00:23:47.755 [2024-06-10 10:49:11.845514] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.755 [2024-06-10 10:49:11.845525] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d624b0, cid 7, qid 0
00:23:47.755 [2024-06-10 10:49:11.845747] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:47.755 [2024-06-10 10:49:11.845754] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:47.755 [2024-06-10 10:49:11.845757] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:47.755 [2024-06-10 10:49:11.845761] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d624b0) on tqpair=0x1cdcec0
00:23:47.755 [2024-06-10 10:49:11.845789] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:23:47.755 [2024-06-10 10:49:11.845800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.755 [2024-06-10 10:49:11.845806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
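The report above is what host/identify.sh obtains via the spdk_nvme_identify example app: connect to the TCP target with SPDK's public host API, read the cached Identify Controller data, print it, then detach. A minimal sketch of that flow follows, assuming only the documented spdk/env.h and spdk/nvme.h interfaces; the program name, error handling, and the handful of fields printed are illustrative and are not taken from the test code.

    #include <stdio.h>

    #include "spdk/env.h"
    #include "spdk/nvme.h"

    /*
     * Sketch: synchronously attach to the NVMe-oF/TCP subsystem used in this run
     * and print a few Identify Controller fields, roughly what spdk_nvme_identify
     * does for the report above. Connect string mirrors the -r argument in the log.
     */
    int
    main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";   /* illustrative app name */
        if (spdk_env_init(&env_opts) < 0) {
            fprintf(stderr, "spdk_env_init() failed\n");
            return 1;
        }

        /* Same target as the -r option passed to spdk_nvme_identify above. */
        if (spdk_nvme_transport_id_parse(&trid,
            "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            fprintf(stderr, "failed to parse transport ID\n");
            return 1;
        }

        /* Runs the admin-queue bring-up state machine traced in the DEBUG lines. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            fprintf(stderr, "spdk_nvme_connect() failed\n");
            return 1;
        }

        /* Cached result of IDENTIFY CONTROLLER (CNS 01h). */
        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Vendor ID: %04x\n", cdata->vid);
        printf("Serial Number: %-.20s\n", cdata->sn);
        printf("Model Number: %-.40s\n", cdata->mn);
        printf("Firmware Version: %-.8s\n", cdata->fr);
        printf("Number of Namespaces: %u\n", cdata->nn);

        /* Starts the controller shutdown seen at the end of this trace. */
        spdk_nvme_detach(ctrlr);
        return 0;
    }

In this sketch, spdk_nvme_connect() is what drives the admin-queue initialization visible in the DEBUG trace (icreq/icresp exchange, CC.EN = 1, wait for CSTS.RDY = 1, IDENTIFY, SET FEATURES), and spdk_nvme_detach() triggers the "Prepare to destruct SSD" shutdown sequence, including the ABORTED - SQ DELETION completions, that the remainder of the log records.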
00:23:47.755 [2024-06-10 10:49:11.845812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.756 [2024-06-10 10:49:11.845818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.756 [2024-06-10 10:49:11.845826] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.756 [2024-06-10 10:49:11.845830] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.756 [2024-06-10 10:49:11.845833] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cdcec0) 00:23:47.756 [2024-06-10 10:49:11.845840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.756 [2024-06-10 10:49:11.845851] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d61f30, cid 3, qid 0 00:23:47.756 [2024-06-10 10:49:11.846060] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.756 [2024-06-10 10:49:11.846066] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.756 [2024-06-10 10:49:11.846070] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.756 [2024-06-10 10:49:11.846073] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d61f30) on tqpair=0x1cdcec0 00:23:47.756 [2024-06-10 10:49:11.846081] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.756 [2024-06-10 10:49:11.846084] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.756 [2024-06-10 10:49:11.846088] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cdcec0) 00:23:47.756 [2024-06-10 10:49:11.846094] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.756 [2024-06-10 10:49:11.846109] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d61f30, cid 3, qid 0 00:23:47.756 [2024-06-10 10:49:11.846327] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.756 [2024-06-10 10:49:11.846334] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.756 [2024-06-10 10:49:11.846337] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.756 [2024-06-10 10:49:11.846341] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d61f30) on tqpair=0x1cdcec0 00:23:47.756 [2024-06-10 10:49:11.846346] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:47.756 [2024-06-10 10:49:11.846351] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:47.756 [2024-06-10 10:49:11.846360] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.756 [2024-06-10 10:49:11.846364] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.756 [2024-06-10 10:49:11.846367] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cdcec0) 00:23:47.756 [2024-06-10 10:49:11.846374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.756 [2024-06-10 10:49:11.846383] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d61f30, cid 3, qid 0 00:23:47.756 
[2024-06-10 10:49:11.846591] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.756 [2024-06-10 10:49:11.846598] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.756 [2024-06-10 10:49:11.846601] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.756 [2024-06-10 10:49:11.846605] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d61f30) on tqpair=0x1cdcec0 00:23:47.756 [2024-06-10 10:49:11.846615] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.756 [2024-06-10 10:49:11.846618] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.756 [2024-06-10 10:49:11.846622] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cdcec0) 00:23:47.756 [2024-06-10 10:49:11.846629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.756 [2024-06-10 10:49:11.846638] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d61f30, cid 3, qid 0 00:23:47.756 [2024-06-10 10:49:11.846837] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.756 [2024-06-10 10:49:11.846843] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.756 [2024-06-10 10:49:11.846846] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.756 [2024-06-10 10:49:11.846850] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d61f30) on tqpair=0x1cdcec0 00:23:47.756 [2024-06-10 10:49:11.846860] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.756 [2024-06-10 10:49:11.846863] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.756 [2024-06-10 10:49:11.846867] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cdcec0) 00:23:47.756 [2024-06-10 10:49:11.846873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.756 [2024-06-10 10:49:11.846883] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d61f30, cid 3, qid 0 00:23:47.756 [2024-06-10 10:49:11.847076] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.756 [2024-06-10 10:49:11.847082] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.756 [2024-06-10 10:49:11.847085] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.756 [2024-06-10 10:49:11.847089] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d61f30) on tqpair=0x1cdcec0 00:23:47.756 [2024-06-10 10:49:11.847099] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.756 [2024-06-10 10:49:11.847103] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.756 [2024-06-10 10:49:11.847106] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cdcec0) 00:23:47.756 [2024-06-10 10:49:11.847115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.756 [2024-06-10 10:49:11.847125] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d61f30, cid 3, qid 0 00:23:47.756 [2024-06-10 10:49:11.851252] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.756 [2024-06-10 10:49:11.851262] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:23:47.756 [2024-06-10 10:49:11.851266] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.756 [2024-06-10 10:49:11.851270] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d61f30) on tqpair=0x1cdcec0 00:23:47.756 [2024-06-10 10:49:11.851281] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:47.756 [2024-06-10 10:49:11.851285] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:47.757 [2024-06-10 10:49:11.851288] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cdcec0) 00:23:47.757 [2024-06-10 10:49:11.851295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.757 [2024-06-10 10:49:11.851307] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d61f30, cid 3, qid 0 00:23:47.757 [2024-06-10 10:49:11.851497] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:47.757 [2024-06-10 10:49:11.851503] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:47.757 [2024-06-10 10:49:11.851506] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:47.757 [2024-06-10 10:49:11.851510] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d61f30) on tqpair=0x1cdcec0 00:23:47.757 [2024-06-10 10:49:11.851518] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:23:47.757 0 Kelvin (-273 Celsius) 00:23:47.757 Available Spare: 0% 00:23:47.757 Available Spare Threshold: 0% 00:23:47.757 Life Percentage Used: 0% 00:23:47.757 Data Units Read: 0 00:23:47.757 Data Units Written: 0 00:23:47.757 Host Read Commands: 0 00:23:47.757 Host Write Commands: 0 00:23:47.757 Controller Busy Time: 0 minutes 00:23:47.757 Power Cycles: 0 00:23:47.757 Power On Hours: 0 hours 00:23:47.757 Unsafe Shutdowns: 0 00:23:47.757 Unrecoverable Media Errors: 0 00:23:47.757 Lifetime Error Log Entries: 0 00:23:47.757 Warning Temperature Time: 0 minutes 00:23:47.757 Critical Temperature Time: 0 minutes 00:23:47.757 00:23:47.757 Number of Queues 00:23:47.757 ================ 00:23:47.757 Number of I/O Submission Queues: 127 00:23:47.757 Number of I/O Completion Queues: 127 00:23:47.757 00:23:47.757 Active Namespaces 00:23:47.757 ================= 00:23:47.757 Namespace ID:1 00:23:47.757 Error Recovery Timeout: Unlimited 00:23:47.757 Command Set Identifier: NVM (00h) 00:23:47.757 Deallocate: Supported 00:23:47.757 Deallocated/Unwritten Error: Not Supported 00:23:47.757 Deallocated Read Value: Unknown 00:23:47.757 Deallocate in Write Zeroes: Not Supported 00:23:47.757 Deallocated Guard Field: 0xFFFF 00:23:47.757 Flush: Supported 00:23:47.757 Reservation: Supported 00:23:47.757 Namespace Sharing Capabilities: Multiple Controllers 00:23:47.757 Size (in LBAs): 131072 (0GiB) 00:23:47.757 Capacity (in LBAs): 131072 (0GiB) 00:23:47.757 Utilization (in LBAs): 131072 (0GiB) 00:23:47.757 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:47.757 EUI64: ABCDEF0123456789 00:23:47.757 UUID: 23e34902-0ac1-481b-a69a-8ded147b2cd8 00:23:47.757 Thin Provisioning: Not Supported 00:23:47.757 Per-NS Atomic Units: Yes 00:23:47.757 Atomic Boundary Size (Normal): 0 00:23:47.757 Atomic Boundary Size (PFail): 0 00:23:47.757 Atomic Boundary Offset: 0 00:23:47.757 Maximum Single Source Range Length: 65535 00:23:47.757 Maximum Copy Length: 65535 00:23:47.757 Maximum Source Range Count: 1 00:23:47.757 
NGUID/EUI64 Never Reused: No 00:23:47.757 Namespace Write Protected: No 00:23:47.757 Number of LBA Formats: 1 00:23:47.757 Current LBA Format: LBA Format #00 00:23:47.757 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:47.757 00:23:47.757 10:49:11 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:47.757 10:49:11 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:47.757 10:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:47.757 10:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:47.757 10:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:47.757 10:49:11 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:47.757 10:49:11 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:47.757 10:49:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:47.757 10:49:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:23:47.757 10:49:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:47.757 10:49:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:23:47.757 10:49:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:47.757 10:49:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:47.757 rmmod nvme_tcp 00:23:47.757 rmmod nvme_fabrics 00:23:47.757 rmmod nvme_keyring 00:23:47.757 10:49:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:47.757 10:49:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:23:47.757 10:49:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:23:47.757 10:49:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 929674 ']' 00:23:47.757 10:49:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 929674 00:23:47.757 10:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@949 -- # '[' -z 929674 ']' 00:23:47.757 10:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # kill -0 929674 00:23:47.757 10:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # uname 00:23:47.757 10:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:47.757 10:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 929674 00:23:47.757 10:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:47.757 10:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:47.757 10:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # echo 'killing process with pid 929674' 00:23:47.757 killing process with pid 929674 00:23:47.757 10:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@968 -- # kill 929674 00:23:47.757 [2024-06-10 10:49:11.994098] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:47.757 10:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@973 -- # wait 929674 00:23:48.019 10:49:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:48.019 10:49:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:48.019 10:49:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:48.019 
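The identify data above is read over the NVMe/TCP listener at 10.0.0.2:4420 for nqn.2016-06.io.spdk:cnode1, and the subsystem is then removed with the rpc_cmd nvmf_delete_subsystem call traced above. As a minimal sketch, the same controller and namespace fields could be pulled from a Linux initiator with nvme-cli -- assuming nvme-cli is installed and the controller enumerates as /dev/nvme0, neither of which is shown in this log:
    nvme discover -t tcp -a 10.0.0.2 -s 4420
    nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme id-ctrl /dev/nvme0        # controller capabilities, as printed above
    nvme id-ns   /dev/nvme0n1      # per-namespace fields (NGUID, EUI64, LBA formats)
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1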
10:49:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:48.019 10:49:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:48.019 10:49:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.019 10:49:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:48.019 10:49:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.936 10:49:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:49.936 00:23:49.936 real 0m11.098s 00:23:49.936 user 0m7.585s 00:23:49.936 sys 0m5.755s 00:23:49.936 10:49:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:49.936 10:49:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:49.936 ************************************ 00:23:49.936 END TEST nvmf_identify 00:23:49.936 ************************************ 00:23:50.199 10:49:14 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:50.199 10:49:14 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:50.199 10:49:14 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:50.199 10:49:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:50.199 ************************************ 00:23:50.199 START TEST nvmf_perf 00:23:50.199 ************************************ 00:23:50.199 10:49:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:50.199 * Looking for test storage... 
00:23:50.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:50.199 10:49:14 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:50.199 10:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:50.199 10:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:50.199 10:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:50.199 10:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:50.199 10:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:50.199 10:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:50.199 10:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:50.199 10:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:50.199 10:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:50.199 10:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:50.199 10:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:50.199 10:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:50.199 10:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:50.199 10:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:50.199 10:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:50.199 10:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:50.199 10:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:50.199 10:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:50.199 10:49:14 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:50.199 10:49:14 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.199 10:49:14 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.199 10:49:14 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.199 10:49:14 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.199 10:49:14 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.199 10:49:14 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:50.200 10:49:14 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.200 10:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:23:50.200 10:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:50.200 10:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:50.200 10:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:50.200 10:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:50.200 10:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:50.200 10:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:50.200 10:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:50.200 10:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:50.200 10:49:14 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:50.200 10:49:14 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:50.200 10:49:14 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:50.200 10:49:14 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:50.200 10:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:50.200 10:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:50.200 10:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:50.200 10:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:50.200 10:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:50.200 10:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.200 10:49:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:50.200 10:49:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.200 10:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:50.200 10:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:50.200 10:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:23:50.200 10:49:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:58.345 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:58.345 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:58.345 Found net devices under 0000:31:00.0: cvl_0_0 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:58.345 Found net devices under 0000:31:00.1: cvl_0_1 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:58.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:58.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.825 ms 00:23:58.345 00:23:58.345 --- 10.0.0.2 ping statistics --- 00:23:58.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.345 rtt min/avg/max/mdev = 0.825/0.825/0.825/0.000 ms 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:58.345 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:58.345 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.509 ms 00:23:58.345 00:23:58.345 --- 10.0.0.1 ping statistics --- 00:23:58.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.345 rtt min/avg/max/mdev = 0.509/0.509/0.509/0.000 ms 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=934300 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 934300 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@830 -- # '[' -z 934300 ']' 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:58.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:58.345 10:49:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:58.345 [2024-06-10 10:49:21.896138] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:23:58.345 [2024-06-10 10:49:21.896204] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:58.345 EAL: No free 2048 kB hugepages reported on node 1 00:23:58.345 [2024-06-10 10:49:21.968911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:58.345 [2024-06-10 10:49:22.043633] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:58.345 [2024-06-10 10:49:22.043671] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
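Condensed, the target-side plumbing traced above moves one port of the e810 pair into a private network namespace, addresses both ends, opens TCP/4420, and verifies reachability before the target is started inside that namespace. Interface names (cvl_0_0 / cvl_0_1) and addresses are the ones this run uses:
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
The target application itself is then launched as 'ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF', exactly as the waitforlisten trace shows.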
00:23:58.345 [2024-06-10 10:49:22.043679] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:58.345 [2024-06-10 10:49:22.043685] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:58.345 [2024-06-10 10:49:22.043691] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:58.345 [2024-06-10 10:49:22.043833] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:23:58.345 [2024-06-10 10:49:22.043949] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:23:58.345 [2024-06-10 10:49:22.044103] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.345 [2024-06-10 10:49:22.044103] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:23:58.605 10:49:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:58.605 10:49:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@863 -- # return 0 00:23:58.605 10:49:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:58.605 10:49:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:58.605 10:49:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:58.605 10:49:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:58.605 10:49:22 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:58.606 10:49:22 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:59.175 10:49:23 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:59.175 10:49:23 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:59.175 10:49:23 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:23:59.175 10:49:23 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:59.435 10:49:23 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:59.435 10:49:23 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:23:59.435 10:49:23 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:59.435 10:49:23 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:59.435 10:49:23 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:59.435 [2024-06-10 10:49:23.672601] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:59.435 10:49:23 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:59.696 10:49:23 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:59.696 10:49:23 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:59.956 10:49:24 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:59.956 10:49:24 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:59.956 10:49:24 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:00.217 [2024-06-10 10:49:24.350834] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:00.217 [2024-06-10 10:49:24.351054] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:00.217 10:49:24 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:00.477 10:49:24 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:24:00.477 10:49:24 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:00.477 10:49:24 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:00.477 10:49:24 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:01.863 Initializing NVMe Controllers 00:24:01.863 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:01.863 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:01.863 Initialization complete. Launching workers. 00:24:01.863 ======================================================== 00:24:01.863 Latency(us) 00:24:01.863 Device Information : IOPS MiB/s Average min max 00:24:01.863 PCIE (0000:65:00.0) NSID 1 from core 0: 79023.81 308.69 404.44 13.31 5326.48 00:24:01.863 ======================================================== 00:24:01.863 Total : 79023.81 308.69 404.44 13.31 5326.48 00:24:01.863 00:24:01.863 10:49:25 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:01.863 EAL: No free 2048 kB hugepages reported on node 1 00:24:03.248 Initializing NVMe Controllers 00:24:03.248 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:03.248 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:03.248 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:03.248 Initialization complete. Launching workers. 
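The perf target is provisioned with the short RPC sequence traced above; condensed (rpc.py paths shortened, and the malloc bdev geometry, subsystem NQN, serial number, and listener address are the values this run uses):
    rpc.py bdev_malloc_create 64 512
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
Nvme0n1 is the local NVMe drive attached earlier via gen_nvme.sh/load_subsystem_config (traddr 0000:65:00.0), so the subsystem exports one RAM-backed and one disk-backed namespace.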
00:24:03.248 ======================================================== 00:24:03.248 Latency(us) 00:24:03.248 Device Information : IOPS MiB/s Average min max 00:24:03.248 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 114.92 0.45 9047.98 306.41 45821.29 00:24:03.248 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 59.96 0.23 17474.99 4997.92 55870.48 00:24:03.248 ======================================================== 00:24:03.248 Total : 174.89 0.68 11937.24 306.41 55870.48 00:24:03.248 00:24:03.248 10:49:27 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:03.248 EAL: No free 2048 kB hugepages reported on node 1 00:24:04.633 Initializing NVMe Controllers 00:24:04.633 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:04.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:04.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:04.633 Initialization complete. Launching workers. 00:24:04.633 ======================================================== 00:24:04.633 Latency(us) 00:24:04.633 Device Information : IOPS MiB/s Average min max 00:24:04.633 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10296.60 40.22 3108.54 456.12 9610.10 00:24:04.633 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3817.88 14.91 8395.30 6999.14 19183.67 00:24:04.633 ======================================================== 00:24:04.633 Total : 14114.48 55.13 4538.58 456.12 19183.67 00:24:04.633 00:24:04.633 10:49:28 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:04.633 10:49:28 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:04.633 10:49:28 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:04.633 EAL: No free 2048 kB hugepages reported on node 1 00:24:07.254 Initializing NVMe Controllers 00:24:07.254 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:07.254 Controller IO queue size 128, less than required. 00:24:07.254 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:07.254 Controller IO queue size 128, less than required. 00:24:07.254 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:07.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:07.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:07.254 Initialization complete. Launching workers. 
00:24:07.254 ======================================================== 00:24:07.254 Latency(us) 00:24:07.254 Device Information : IOPS MiB/s Average min max 00:24:07.254 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1183.49 295.87 110580.92 63875.71 193558.23 00:24:07.254 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 590.00 147.50 226460.97 103032.15 382773.67 00:24:07.254 ======================================================== 00:24:07.254 Total : 1773.49 443.37 149131.50 63875.71 382773.67 00:24:07.254 00:24:07.254 10:49:30 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:07.254 EAL: No free 2048 kB hugepages reported on node 1 00:24:07.254 No valid NVMe controllers or AIO or URING devices found 00:24:07.254 Initializing NVMe Controllers 00:24:07.254 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:07.254 Controller IO queue size 128, less than required. 00:24:07.254 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:07.254 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:07.254 Controller IO queue size 128, less than required. 00:24:07.254 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:07.254 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:07.254 WARNING: Some requested NVMe devices were skipped 00:24:07.254 10:49:31 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:07.254 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.800 Initializing NVMe Controllers 00:24:09.800 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:09.800 Controller IO queue size 128, less than required. 00:24:09.800 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:09.800 Controller IO queue size 128, less than required. 00:24:09.800 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:09.800 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:09.800 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:09.800 Initialization complete. Launching workers. 
00:24:09.800 00:24:09.800 ==================== 00:24:09.800 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:09.800 TCP transport: 00:24:09.800 polls: 37070 00:24:09.800 idle_polls: 14934 00:24:09.800 sock_completions: 22136 00:24:09.800 nvme_completions: 4319 00:24:09.800 submitted_requests: 6456 00:24:09.800 queued_requests: 1 00:24:09.800 00:24:09.800 ==================== 00:24:09.800 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:09.800 TCP transport: 00:24:09.800 polls: 38141 00:24:09.800 idle_polls: 13575 00:24:09.800 sock_completions: 24566 00:24:09.800 nvme_completions: 4767 00:24:09.800 submitted_requests: 7192 00:24:09.800 queued_requests: 1 00:24:09.800 ======================================================== 00:24:09.800 Latency(us) 00:24:09.800 Device Information : IOPS MiB/s Average min max 00:24:09.800 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1077.65 269.41 121392.26 78050.52 200919.19 00:24:09.800 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1189.46 297.36 109184.81 47712.45 144323.03 00:24:09.800 ======================================================== 00:24:09.800 Total : 2267.11 566.78 114987.52 47712.45 200919.19 00:24:09.800 00:24:09.800 10:49:33 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:09.800 10:49:33 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:09.800 10:49:33 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:09.800 10:49:33 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:09.800 10:49:33 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:09.800 10:49:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:09.800 10:49:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:24:09.800 10:49:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:09.800 10:49:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:24:09.800 10:49:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:09.800 10:49:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:09.800 rmmod nvme_tcp 00:24:09.800 rmmod nvme_fabrics 00:24:09.800 rmmod nvme_keyring 00:24:09.800 10:49:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:09.800 10:49:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:09.800 10:49:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:09.800 10:49:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 934300 ']' 00:24:09.800 10:49:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 934300 00:24:09.800 10:49:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@949 -- # '[' -z 934300 ']' 00:24:09.800 10:49:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # kill -0 934300 00:24:09.800 10:49:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # uname 00:24:09.800 10:49:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:09.800 10:49:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 934300 00:24:09.800 10:49:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:09.800 10:49:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:09.800 10:49:34 nvmf_tcp.nvmf_perf 
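Each workload above is one spdk_nvme_perf invocation against the TCP listener; the final, statistics-gathering run reduces to:
    spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
Here -q is the queue depth, -o the I/O size in bytes, -w/-M the access pattern (random, 50% reads), -t the run time in seconds, and --transport-stat requests the per-namespace poll/completion counters printed in the summary (polls vs. idle_polls, sock_completions, nvme_completions). The earlier 36964-byte run skipped both namespaces because that I/O size is not a multiple of the 512-byte sector size, which is why it reports no valid controllers.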
-- common/autotest_common.sh@967 -- # echo 'killing process with pid 934300' 00:24:09.800 killing process with pid 934300 00:24:09.800 10:49:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@968 -- # kill 934300 00:24:09.800 [2024-06-10 10:49:34.027361] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:09.800 10:49:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@973 -- # wait 934300 00:24:12.346 10:49:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:12.346 10:49:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:12.346 10:49:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:12.346 10:49:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:12.346 10:49:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:12.346 10:49:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.346 10:49:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:12.346 10:49:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.260 10:49:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:14.260 00:24:14.260 real 0m23.793s 00:24:14.260 user 0m57.656s 00:24:14.260 sys 0m7.794s 00:24:14.260 10:49:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:14.260 10:49:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:14.260 ************************************ 00:24:14.260 END TEST nvmf_perf 00:24:14.260 ************************************ 00:24:14.260 10:49:38 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:14.261 10:49:38 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:24:14.261 10:49:38 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:14.261 10:49:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:14.261 ************************************ 00:24:14.261 START TEST nvmf_fio_host 00:24:14.261 ************************************ 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:14.261 * Looking for test storage... 
00:24:14.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:14.261 10:49:38 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:20.854 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:21.115 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:21.115 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:21.115 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:21.116 Found net devices under 0000:31:00.0: cvl_0_0 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:21.116 Found net devices under 0000:31:00.1: cvl_0_1 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
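The device discovery traced above boils down to a sysfs walk: every Intel PCI function whose device ID is on the E810 allowlist (0x1592/0x159b in this run) has its bound kernel interface read back from /sys/bus/pci/devices/<bdf>/net/, which is how 0000:31:00.0 and 0000:31:00.1 resolve to cvl_0_0 and cvl_0_1 and is_hw ends up yes. A minimal stand-alone sketch of that lookup, assuming only the standard Linux sysfs layout; the script below is illustrative, not the harness code verbatim:

#!/usr/bin/env bash
# Sketch of the PCI-to-netdev lookup performed by gather_supported_nvmf_pci_devs:
# keep Intel functions with E810 device IDs and print their netdev names.
set -euo pipefail
intel=0x8086
e810_ids=(0x1592 0x159b)            # E810 device IDs matched by the harness
for dev in /sys/bus/pci/devices/*; do
  [[ $(<"$dev/vendor") == "$intel" ]] || continue
  device=$(<"$dev/device")
  for id in "${e810_ids[@]}"; do
    [[ $device == "$id" ]] || continue
    for net in "$dev"/net/*; do      # kernel exposes the bound interface here
      [[ -e $net ]] && echo "${dev##*/} -> ${net##*/}"
    done
  done
done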
00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:21.116 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:21.377 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:21.377 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:21.377 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:21.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:21.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.704 ms 00:24:21.378 00:24:21.378 --- 10.0.0.2 ping statistics --- 00:24:21.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.378 rtt min/avg/max/mdev = 0.704/0.704/0.704/0.000 ms 00:24:21.378 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:21.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:21.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:24:21.378 00:24:21.378 --- 10.0.0.1 ping statistics --- 00:24:21.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.378 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:24:21.378 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:21.378 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:24:21.378 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:21.378 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:21.378 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:21.378 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:21.378 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:21.378 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:21.378 10:49:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:21.378 10:49:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:21.378 10:49:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:21.378 10:49:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:21.378 10:49:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.378 10:49:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=941215 00:24:21.378 10:49:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:21.378 10:49:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:21.378 10:49:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 941215 00:24:21.378 10:49:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@830 -- # '[' -z 941215 ']' 00:24:21.378 10:49:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.378 10:49:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:21.378 10:49:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:21.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:21.378 10:49:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:21.378 10:49:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.378 [2024-06-10 10:49:45.570964] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:24:21.378 [2024-06-10 10:49:45.571026] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:21.378 EAL: No free 2048 kB hugepages reported on node 1 00:24:21.378 [2024-06-10 10:49:45.643714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:21.639 [2024-06-10 10:49:45.719493] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
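The TCP "fabric" in this phy run is simply two ports of the same E810 split across network namespaces: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed 10.0.0.2/24 for the target, cvl_0_1 stays in the default namespace as the 10.0.0.1/24 initiator, port 4420 is opened in iptables, and both directions are ping-verified before nvmf_tgt is launched under ip netns exec. A condensed sketch of that nvmf_tcp_init flow, assuming root and the interface names from this run (error handling and cleanup omitted):

#!/usr/bin/env bash
# Condensed sketch of the namespace split shown above; requires root.
set -euo pipefail
TGT_IF=cvl_0_0            # becomes the SPDK target port, inside the namespace
INI_IF=cvl_0_1            # stays in the default namespace as the initiator port
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
ping -c 1 10.0.0.2                      # initiator -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1  # target namespace -> initiator
# The target app is then started inside the namespace, as in the log:
#   ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF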
00:24:21.639 [2024-06-10 10:49:45.719530] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:21.639 [2024-06-10 10:49:45.719538] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:21.639 [2024-06-10 10:49:45.719545] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:21.639 [2024-06-10 10:49:45.719551] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:21.639 [2024-06-10 10:49:45.719693] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.639 [2024-06-10 10:49:45.719811] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:24:21.639 [2024-06-10 10:49:45.719967] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.639 [2024-06-10 10:49:45.719968] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:24:22.212 10:49:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:22.212 10:49:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@863 -- # return 0 00:24:22.212 10:49:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:22.212 [2024-06-10 10:49:46.479079] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:22.473 10:49:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:22.473 10:49:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:22.473 10:49:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.473 10:49:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:22.473 Malloc1 00:24:22.473 10:49:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:22.733 10:49:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:22.994 10:49:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:22.994 [2024-06-10 10:49:47.192467] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:22.994 [2024-06-10 10:49:47.192719] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:22.994 10:49:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:23.255 10:49:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:23.255 10:49:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:23.255 10:49:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- 
# fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:23.255 10:49:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:24:23.255 10:49:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:23.255 10:49:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:24:23.255 10:49:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:23.255 10:49:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:24:23.255 10:49:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:24:23.255 10:49:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:24:23.255 10:49:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:23.255 10:49:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:24:23.255 10:49:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:24:23.255 10:49:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:24:23.255 10:49:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:24:23.255 10:49:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:24:23.255 10:49:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:24:23.255 10:49:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:23.255 10:49:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:24:23.255 10:49:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:24:23.255 10:49:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:24:23.255 10:49:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:23.255 10:49:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:23.515 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:23.516 fio-3.35 00:24:23.516 Starting 1 thread 00:24:23.516 EAL: No free 2048 kB hugepages reported on node 1 00:24:26.060 00:24:26.060 test: (groupid=0, jobs=1): err= 0: pid=941913: Mon Jun 10 10:49:50 2024 00:24:26.060 read: IOPS=13.9k, BW=54.1MiB/s (56.7MB/s)(109MiB/2005msec) 00:24:26.060 slat (usec): min=2, max=244, avg= 2.18, stdev= 2.13 00:24:26.060 clat (usec): min=3650, max=9087, avg=5085.07, stdev=376.54 00:24:26.060 lat (usec): min=3653, max=9089, avg=5087.25, stdev=376.66 00:24:26.060 clat percentiles (usec): 00:24:26.060 | 1.00th=[ 4228], 5.00th=[ 4490], 10.00th=[ 4621], 20.00th=[ 4817], 00:24:26.060 | 30.00th=[ 4883], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5145], 00:24:26.060 | 70.00th=[ 
5276], 80.00th=[ 5342], 90.00th=[ 5538], 95.00th=[ 5669], 00:24:26.060 | 99.00th=[ 5997], 99.50th=[ 6259], 99.90th=[ 7963], 99.95th=[ 8225], 00:24:26.060 | 99.99th=[ 8717] 00:24:26.060 bw ( KiB/s): min=54128, max=55904, per=100.00%, avg=55432.00, stdev=870.62, samples=4 00:24:26.060 iops : min=13532, max=13976, avg=13858.00, stdev=217.65, samples=4 00:24:26.060 write: IOPS=13.9k, BW=54.2MiB/s (56.8MB/s)(109MiB/2005msec); 0 zone resets 00:24:26.060 slat (usec): min=2, max=242, avg= 2.28, stdev= 1.60 00:24:26.060 clat (usec): min=2598, max=8669, avg=4088.83, stdev=320.03 00:24:26.060 lat (usec): min=2613, max=8671, avg=4091.11, stdev=320.19 00:24:26.060 clat percentiles (usec): 00:24:26.060 | 1.00th=[ 3326], 5.00th=[ 3621], 10.00th=[ 3720], 20.00th=[ 3851], 00:24:26.060 | 30.00th=[ 3949], 40.00th=[ 4015], 50.00th=[ 4080], 60.00th=[ 4146], 00:24:26.060 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4555], 00:24:26.060 | 99.00th=[ 4817], 99.50th=[ 4948], 99.90th=[ 6652], 99.95th=[ 7242], 00:24:26.060 | 99.99th=[ 8291] 00:24:26.060 bw ( KiB/s): min=54440, max=56000, per=100.00%, avg=55466.00, stdev=697.83, samples=4 00:24:26.060 iops : min=13610, max=14000, avg=13866.50, stdev=174.46, samples=4 00:24:26.060 lat (msec) : 4=18.79%, 10=81.21% 00:24:26.060 cpu : usr=70.51%, sys=25.15%, ctx=28, majf=0, minf=6 00:24:26.060 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:26.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:26.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:26.060 issued rwts: total=27778,27798,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:26.060 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:26.060 00:24:26.060 Run status group 0 (all jobs): 00:24:26.060 READ: bw=54.1MiB/s (56.7MB/s), 54.1MiB/s-54.1MiB/s (56.7MB/s-56.7MB/s), io=109MiB (114MB), run=2005-2005msec 00:24:26.060 WRITE: bw=54.2MiB/s (56.8MB/s), 54.2MiB/s-54.2MiB/s (56.8MB/s-56.8MB/s), io=109MiB (114MB), run=2005-2005msec 00:24:26.060 10:49:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:26.060 10:49:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:26.060 10:49:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:24:26.060 10:49:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:26.060 10:49:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:24:26.060 10:49:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:26.060 10:49:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:24:26.060 10:49:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:24:26.060 10:49:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:24:26.060 10:49:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:26.060 10:49:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:24:26.060 10:49:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:24:26.060 10:49:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:24:26.060 10:49:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:24:26.060 10:49:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:24:26.060 10:49:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:26.060 10:49:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:24:26.060 10:49:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:24:26.060 10:49:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:24:26.060 10:49:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:24:26.060 10:49:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:26.060 10:49:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:26.320 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:26.320 fio-3.35 00:24:26.320 Starting 1 thread 00:24:26.320 EAL: No free 2048 kB hugepages reported on node 1 00:24:28.864 00:24:28.864 test: (groupid=0, jobs=1): err= 0: pid=942571: Mon Jun 10 10:49:52 2024 00:24:28.864 read: IOPS=9113, BW=142MiB/s (149MB/s)(286MiB/2005msec) 00:24:28.864 slat (usec): min=3, max=110, avg= 3.65, stdev= 1.59 00:24:28.864 clat (usec): min=1593, max=20281, avg=8776.74, stdev=2285.21 00:24:28.864 lat (usec): min=1597, max=20285, avg=8780.39, stdev=2285.41 00:24:28.864 clat percentiles (usec): 00:24:28.864 | 1.00th=[ 4424], 5.00th=[ 5407], 10.00th=[ 5997], 20.00th=[ 6783], 00:24:28.864 | 30.00th=[ 7373], 40.00th=[ 8029], 50.00th=[ 8586], 60.00th=[ 9110], 00:24:28.864 | 70.00th=[ 9896], 80.00th=[10814], 90.00th=[12125], 95.00th=[12518], 00:24:28.864 | 99.00th=[14091], 99.50th=[15008], 99.90th=[16712], 99.95th=[17433], 00:24:28.864 | 99.99th=[18482] 00:24:28.864 bw ( KiB/s): min=60096, max=84448, per=49.13%, avg=71640.00, stdev=10043.77, samples=4 00:24:28.864 iops : min= 3756, max= 5278, avg=4477.50, stdev=627.74, samples=4 00:24:28.864 write: IOPS=5480, BW=85.6MiB/s (89.8MB/s)(146MiB/1700msec); 0 zone resets 00:24:28.864 slat (usec): min=40, max=365, avg=41.16, stdev= 7.51 00:24:28.864 clat (usec): min=2392, max=17131, avg=9400.78, stdev=1624.03 00:24:28.864 lat (usec): min=2432, max=17270, avg=9441.94, stdev=1626.54 00:24:28.864 clat percentiles (usec): 00:24:28.864 | 1.00th=[ 6259], 5.00th=[ 7177], 10.00th=[ 7570], 20.00th=[ 8094], 00:24:28.864 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9634], 00:24:28.864 | 70.00th=[10028], 80.00th=[10552], 90.00th=[11469], 95.00th=[12125], 00:24:28.864 | 99.00th=[14484], 99.50th=[15664], 99.90th=[16581], 99.95th=[16909], 00:24:28.864 | 99.99th=[17171] 00:24:28.864 bw ( KiB/s): min=63008, max=87840, per=85.00%, avg=74536.00, stdev=10311.23, 
samples=4 00:24:28.864 iops : min= 3938, max= 5490, avg=4658.50, stdev=644.45, samples=4 00:24:28.864 lat (msec) : 2=0.03%, 4=0.43%, 10=70.04%, 20=29.49%, 50=0.01% 00:24:28.864 cpu : usr=83.89%, sys=13.27%, ctx=14, majf=0, minf=23 00:24:28.864 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:24:28.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:28.864 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:28.864 issued rwts: total=18272,9317,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:28.864 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:28.864 00:24:28.864 Run status group 0 (all jobs): 00:24:28.864 READ: bw=142MiB/s (149MB/s), 142MiB/s-142MiB/s (149MB/s-149MB/s), io=286MiB (299MB), run=2005-2005msec 00:24:28.864 WRITE: bw=85.6MiB/s (89.8MB/s), 85.6MiB/s-85.6MiB/s (89.8MB/s-89.8MB/s), io=146MiB (153MB), run=1700-1700msec 00:24:28.864 10:49:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:29.126 10:49:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:29.126 10:49:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:29.126 10:49:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:29.126 10:49:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:29.126 10:49:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:29.126 10:49:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:29.126 10:49:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:29.126 10:49:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:29.126 10:49:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:29.126 10:49:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:29.126 rmmod nvme_tcp 00:24:29.126 rmmod nvme_fabrics 00:24:29.126 rmmod nvme_keyring 00:24:29.126 10:49:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:29.126 10:49:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:24:29.126 10:49:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:29.126 10:49:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 941215 ']' 00:24:29.126 10:49:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 941215 00:24:29.126 10:49:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@949 -- # '[' -z 941215 ']' 00:24:29.126 10:49:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # kill -0 941215 00:24:29.126 10:49:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # uname 00:24:29.126 10:49:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:29.126 10:49:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 941215 00:24:29.126 10:49:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:29.126 10:49:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:29.126 10:49:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 941215' 00:24:29.126 killing process with pid 941215 00:24:29.126 10:49:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@968 -- # kill 941215 00:24:29.126 [2024-06-10 10:49:53.312925] 
app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:29.126 10:49:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@973 -- # wait 941215 00:24:29.387 10:49:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:29.387 10:49:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:29.387 10:49:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:29.387 10:49:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:29.387 10:49:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:29.387 10:49:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.387 10:49:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:29.387 10:49:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.301 10:49:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:31.301 00:24:31.301 real 0m17.371s 00:24:31.301 user 1m8.772s 00:24:31.301 sys 0m7.341s 00:24:31.301 10:49:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:31.301 10:49:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.301 ************************************ 00:24:31.301 END TEST nvmf_fio_host 00:24:31.301 ************************************ 00:24:31.301 10:49:55 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:31.301 10:49:55 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:24:31.301 10:49:55 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:31.301 10:49:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:31.563 ************************************ 00:24:31.563 START TEST nvmf_failover 00:24:31.563 ************************************ 00:24:31.563 10:49:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:31.563 * Looking for test storage... 
00:24:31.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:31.563 10:49:55 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:31.563 10:49:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:31.563 10:49:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:31.563 10:49:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:31.563 10:49:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:31.563 10:49:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:31.563 10:49:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:31.563 10:49:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:31.563 10:49:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:31.563 10:49:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:31.563 10:49:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:31.563 10:49:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:31.563 10:49:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:31.563 10:49:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:31.563 10:49:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:31.563 10:49:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:31.563 10:49:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:31.563 10:49:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:31.563 10:49:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:31.563 10:49:55 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.563 10:49:55 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.563 10:49:55 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.563 10:49:55 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.563 10:49:55 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.563 10:49:55 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.563 10:49:55 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:31.563 10:49:55 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.563 10:49:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:24:31.563 10:49:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:31.563 10:49:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:31.563 10:49:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:31.563 10:49:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:31.564 10:49:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:31.564 10:49:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:31.564 10:49:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:31.564 10:49:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:31.564 10:49:55 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:31.564 10:49:55 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:31.564 10:49:55 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:31.564 10:49:55 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:31.564 10:49:55 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:31.564 10:49:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:31.564 10:49:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:31.564 10:49:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:31.564 10:49:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:24:31.564 10:49:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:31.564 10:49:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.564 10:49:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:31.564 10:49:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.564 10:49:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:31.564 10:49:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:31.564 10:49:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:24:31.564 10:49:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:39.706 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:39.706 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:24:39.706 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:39.706 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:39.706 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:39.706 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:39.706 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:39.706 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:24:39.706 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:39.706 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:24:39.706 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:24:39.706 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:24:39.706 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:39.707 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:39.707 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:39.707 Found net devices under 0000:31:00.0: cvl_0_0 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:39.707 Found net devices under 0000:31:00.1: cvl_0_1 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:39.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:39.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:24:39.707 00:24:39.707 --- 10.0.0.2 ping statistics --- 00:24:39.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.707 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:39.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:39.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.363 ms 00:24:39.707 00:24:39.707 --- 10.0.0.1 ping statistics --- 00:24:39.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.707 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:39.707 10:50:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:39.707 10:50:03 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:39.707 10:50:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:39.707 10:50:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:39.707 10:50:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:39.707 10:50:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=947288 00:24:39.707 10:50:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 947288 00:24:39.707 10:50:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:39.707 10:50:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 947288 ']' 00:24:39.707 10:50:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.707 10:50:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:39.707 10:50:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:39.707 10:50:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:39.707 10:50:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:39.707 [2024-06-10 10:50:03.088285] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
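Before nvmf_tgt is started above, nvmf_tcp_init lays out the whole test network: the first e810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and becomes the target-side interface at 10.0.0.2/24, the second port (cvl_0_1) stays in the root namespace as the initiator side at 10.0.0.1/24, an iptables rule opens TCP/4420 on the initiator interface, both directions are ping-verified, and nvme-tcp is loaded. A condensed sketch of those steps as a standalone script follows; the interface names and addresses are the ones from this run (on another host the e810 netdev names would differ), so treat it as illustrative rather than a verbatim copy of nvmf/common.sh:

#!/usr/bin/env bash
# Rebuild the nvmf_tcp_init topology traced above: target NIC inside a netns,
# initiator NIC in the root namespace, one /24 shared between them.
set -e
TARGET_IF=cvl_0_0            # e810 port used by the NVMe-oF target in this run
INITIATOR_IF=cvl_0_1         # e810 port left on the initiator/host side
NETNS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NETNS"
ip link set "$TARGET_IF" netns "$NETNS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NETNS" ip link set "$TARGET_IF" up
ip netns exec "$NETNS" ip link set lo up

# Allow NVMe/TCP admin port traffic arriving on the initiator-side interface
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                           # root namespace -> target namespace
ip netns exec "$NETNS" ping -c 1 10.0.0.1    # target namespace -> root namespace
modprobe nvme-tcp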
00:24:39.707 [2024-06-10 10:50:03.088374] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:39.707 EAL: No free 2048 kB hugepages reported on node 1 00:24:39.707 [2024-06-10 10:50:03.182062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:39.707 [2024-06-10 10:50:03.275433] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:39.708 [2024-06-10 10:50:03.275492] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:39.708 [2024-06-10 10:50:03.275501] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:39.708 [2024-06-10 10:50:03.275507] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:39.708 [2024-06-10 10:50:03.275514] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:39.708 [2024-06-10 10:50:03.275644] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:24:39.708 [2024-06-10 10:50:03.275809] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:24:39.708 [2024-06-10 10:50:03.275809] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:24:39.708 10:50:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:39.708 10:50:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:24:39.708 10:50:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:39.708 10:50:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:39.708 10:50:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:39.708 10:50:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:39.708 10:50:03 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:39.968 [2024-06-10 10:50:04.025799] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:39.968 10:50:04 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:39.968 Malloc0 00:24:39.968 10:50:04 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:40.230 10:50:04 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:40.491 10:50:04 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:40.491 [2024-06-10 10:50:04.717501] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:40.491 [2024-06-10 10:50:04.717730] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:40.491 10:50:04 nvmf_tcp.nvmf_failover 
-- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:40.752 [2024-06-10 10:50:04.878118] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:40.752 10:50:04 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:40.752 [2024-06-10 10:50:05.038657] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:41.013 10:50:05 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=947654 00:24:41.013 10:50:05 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:41.013 10:50:05 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:41.013 10:50:05 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 947654 /var/tmp/bdevperf.sock 00:24:41.013 10:50:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 947654 ']' 00:24:41.013 10:50:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:41.013 10:50:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:41.013 10:50:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:41.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
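The block above is plain JSON-RPC configuration of the target that was just started inside the namespace (its rpc.py calls go to the default /var/tmp/spdk.sock): a TCP transport created with the flags from the trace, a 64 MB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as its namespace, listeners on 10.0.0.2 ports 4420/4421/4422, and finally bdevperf started in -z mode on its own RPC socket so it sits idle until perform_tests is sent. A sketch of the same sequence, with the long workspace path shortened to an $SPDK variable (shorthand introduced here, not something the test defines):

#!/usr/bin/env bash
# Target-side configuration as issued in the trace above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # SPDK checkout used in this run
RPC="$SPDK/scripts/rpc.py"                               # talks to /var/tmp/spdk.sock by default

# TCP transport; -o and -u 8192 copied as-is from the traced command
$RPC nvmf_create_transport -t tcp -o -u 8192
# 64 MB malloc bdev with 512-byte blocks to export as the namespace
$RPC bdev_malloc_create 64 512 -b Malloc0
# Subsystem allowing any host (-a) with the given serial number
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# Three TCP listeners the failover test will later add and remove
# (the test itself issues these one call at a time)
for port in 4420 4421 4422; do
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
done
# Initiator-side I/O generator: queue depth 128, 4 KiB verify I/O for 15 s,
# started with -z so it waits for perform_tests on its own RPC socket
$SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &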
00:24:41.013 10:50:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:41.013 10:50:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:41.955 10:50:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:41.955 10:50:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:24:41.955 10:50:05 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:41.955 NVMe0n1 00:24:41.955 10:50:06 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:42.526 00:24:42.526 10:50:06 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=947997 00:24:42.526 10:50:06 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:42.526 10:50:06 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:43.468 10:50:07 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:43.468 [2024-06-10 10:50:07.720407] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f92b0 is same with the state(5) to be set 00:24:43.469 [2024-06-10 10:50:07.720449] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f92b0 is same with the state(5) to be set 00:24:43.469 [2024-06-10 10:50:07.720455] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f92b0 is same with the state(5) to be set 00:24:43.469 [2024-06-10 10:50:07.720460] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f92b0 is same with the state(5) to be set 00:24:43.469 [2024-06-10 10:50:07.720464] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f92b0 is same with the state(5) to be set 00:24:43.469 [2024-06-10 10:50:07.720469] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f92b0 is same with the state(5) to be set 00:24:43.469 [2024-06-10 10:50:07.720473] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f92b0 is same with the state(5) to be set 00:24:43.469 [2024-06-10 10:50:07.720478] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f92b0 is same with the state(5) to be set 00:24:43.469 [2024-06-10 10:50:07.720482] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f92b0 is same with the state(5) to be set 00:24:43.469 [2024-06-10 10:50:07.720486] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f92b0 is same with the state(5) to be set 00:24:43.469 [2024-06-10 10:50:07.720491] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f92b0 is same with the state(5) to be set 00:24:43.469 [2024-06-10 10:50:07.720495] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f92b0 is same with the state(5) to be set 00:24:43.469 [2024-06-10 10:50:07.720499] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f92b0 is same with the state(5) to be set
[ ... the same tcp.c:1602:nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0x22f92b0 repeats for every entry stamped between 10:50:07.720503 and 10:50:07.720689 ... ]
00:24:43.469 [2024-06-10 10:50:07.720694] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x22f92b0 is same with the state(5) to be set 00:24:43.469 [2024-06-10 10:50:07.720698] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f92b0 is same with the state(5) to be set 00:24:43.469 [2024-06-10 10:50:07.720702] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f92b0 is same with the state(5) to be set 00:24:43.469 [2024-06-10 10:50:07.720706] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f92b0 is same with the state(5) to be set 00:24:43.469 [2024-06-10 10:50:07.720711] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f92b0 is same with the state(5) to be set 00:24:43.469 10:50:07 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:46.843 10:50:10 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:46.843 00:24:46.843 10:50:11 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:47.104 [2024-06-10 10:50:11.246266] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa120 is same with the state(5) to be set 00:24:47.104 [2024-06-10 10:50:11.246302] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa120 is same with the state(5) to be set 00:24:47.104 [2024-06-10 10:50:11.246307] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa120 is same with the state(5) to be set 00:24:47.104 [2024-06-10 10:50:11.246312] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa120 is same with the state(5) to be set 00:24:47.104 [2024-06-10 10:50:11.246316] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa120 is same with the state(5) to be set 00:24:47.104 [2024-06-10 10:50:11.246321] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa120 is same with the state(5) to be set 00:24:47.104 [2024-06-10 10:50:11.246326] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa120 is same with the state(5) to be set 00:24:47.104 [2024-06-10 10:50:11.246330] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa120 is same with the state(5) to be set 00:24:47.104 [2024-06-10 10:50:11.246334] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa120 is same with the state(5) to be set 00:24:47.104 [2024-06-10 10:50:11.246338] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa120 is same with the state(5) to be set 00:24:47.104 [2024-06-10 10:50:11.246343] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa120 is same with the state(5) to be set 00:24:47.104 [2024-06-10 10:50:11.246347] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa120 is same with the state(5) to be set 00:24:47.104 [2024-06-10 10:50:11.246351] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa120 is same with the state(5) to be set 00:24:47.104 [2024-06-10 10:50:11.246355] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x22fa120 is same with the state(5) to be set
[ ... the same tcp.c:1602:nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0x22fa120 repeats for every entry stamped between 10:50:11.246359 and 10:50:11.246655 ... ]
00:24:47.105 [2024-06-10 10:50:11.246660] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x22fa120 is same with the state(5) to be set 00:24:47.105 [2024-06-10 10:50:11.246664] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa120 is same with the state(5) to be set 00:24:47.105 [2024-06-10 10:50:11.246668] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa120 is same with the state(5) to be set 00:24:47.105 [2024-06-10 10:50:11.246674] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa120 is same with the state(5) to be set 00:24:47.105 [2024-06-10 10:50:11.246678] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa120 is same with the state(5) to be set 00:24:47.105 [2024-06-10 10:50:11.246683] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa120 is same with the state(5) to be set 00:24:47.105 [2024-06-10 10:50:11.246688] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa120 is same with the state(5) to be set 00:24:47.105 [2024-06-10 10:50:11.246692] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa120 is same with the state(5) to be set 00:24:47.105 [2024-06-10 10:50:11.246697] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa120 is same with the state(5) to be set 00:24:47.105 [2024-06-10 10:50:11.246703] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa120 is same with the state(5) to be set 00:24:47.105 [2024-06-10 10:50:11.246709] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa120 is same with the state(5) to be set 00:24:47.105 10:50:11 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:50.408 10:50:14 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:50.408 [2024-06-10 10:50:14.419429] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:50.408 10:50:14 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:51.352 10:50:15 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:51.352 [2024-06-10 10:50:15.595753] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fb010 is same with the state(5) to be set 00:24:51.352 [2024-06-10 10:50:15.595789] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fb010 is same with the state(5) to be set 00:24:51.352 [2024-06-10 10:50:15.595794] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fb010 is same with the state(5) to be set 00:24:51.352 [2024-06-10 10:50:15.595804] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fb010 is same with the state(5) to be set 00:24:51.352 [2024-06-10 10:50:15.595808] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fb010 is same with the state(5) to be set 00:24:51.352 [2024-06-10 10:50:15.595813] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fb010 is same with the state(5) to be set 00:24:51.352 [2024-06-10 10:50:15.595817] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x22fb010 is same with the state(5) to be set
[ ... the same tcp.c:1602:nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0x22fb010 repeats for every entry stamped between 10:50:15.595822 and 10:50:15.596316 ... ]
00:24:51.353 [2024-06-10 10:50:15.596320] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fb010 is same 
with the state(5) to be set 00:24:51.353 [2024-06-10 10:50:15.596325] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fb010 is same with the state(5) to be set 00:24:51.353 [2024-06-10 10:50:15.596329] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fb010 is same with the state(5) to be set 00:24:51.353 [2024-06-10 10:50:15.596334] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fb010 is same with the state(5) to be set 00:24:51.353 [2024-06-10 10:50:15.596338] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fb010 is same with the state(5) to be set 00:24:51.353 [2024-06-10 10:50:15.596343] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fb010 is same with the state(5) to be set 00:24:51.353 [2024-06-10 10:50:15.596347] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fb010 is same with the state(5) to be set 00:24:51.353 [2024-06-10 10:50:15.596352] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fb010 is same with the state(5) to be set 00:24:51.353 [2024-06-10 10:50:15.596357] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fb010 is same with the state(5) to be set 00:24:51.353 [2024-06-10 10:50:15.596361] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fb010 is same with the state(5) to be set 00:24:51.353 [2024-06-10 10:50:15.596365] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fb010 is same with the state(5) to be set 00:24:51.353 [2024-06-10 10:50:15.596370] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fb010 is same with the state(5) to be set 00:24:51.353 10:50:15 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 947997 00:24:57.950 0 00:24:57.950 10:50:21 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 947654 00:24:57.950 10:50:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 947654 ']' 00:24:57.950 10:50:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 947654 00:24:57.950 10:50:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:24:57.950 10:50:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:57.950 10:50:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 947654 00:24:57.950 10:50:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:57.950 10:50:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:57.950 10:50:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 947654' 00:24:57.950 killing process with pid 947654 00:24:57.950 10:50:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 947654 00:24:57.950 10:50:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 947654 00:24:57.950 10:50:21 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:57.950 [2024-06-10 10:50:05.124708] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
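Everything from host/failover.sh@35 to @63 above is the failover exercise itself, and the output that follows is the bdevperf-side log replayed from try.txt. In outline: NVMe0 is attached over the 4420 and 4421 paths, perform_tests starts the 15-second verify job, and the script then drops the 4420 listener, attaches a 4422 path, drops 4421, re-adds 4420 and finally drops 4422, so bdev_nvme has to keep I/O running while paths come and go; the bursts of tcp.c:1602 qpair-state messages coincide with each listener removal. A sketch of just that driver sequence, reusing the sockets, ports and NQN from this run ($SPDK, $RPC, $BPERF_SOCK and $NQN are shorthand introduced here, not variables the test sets):

#!/usr/bin/env bash
# Failover driver sequence as traced in host/failover.sh for this run.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
BPERF_SOCK=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

# Two initial paths registered under the same controller name NVMe0
$RPC -s $BPERF_SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
$RPC -s $BPERF_SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN
# Kick off the verify workload that bdevperf (-z) has been waiting for
$SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests &
sleep 1
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420   # drop the first path
sleep 3
$RPC -s $BPERF_SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421   # drop the second path
sleep 3
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420      # bring the first path back
sleep 1
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422
wait   # let perform_tests run to completion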
00:24:57.950 [2024-06-10 10:50:05.124802] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid947654 ] 00:24:57.950 EAL: No free 2048 kB hugepages reported on node 1 00:24:57.950 [2024-06-10 10:50:05.186150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.950 [2024-06-10 10:50:05.250390] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.950 Running I/O for 15 seconds... 00:24:57.950 [2024-06-10 10:50:07.722974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.950 [2024-06-10 10:50:07.723012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.950 [2024-06-10 10:50:07.723030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:96376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.950 [2024-06-10 10:50:07.723039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.950 [2024-06-10 10:50:07.723048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:96384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.950 [2024-06-10 10:50:07.723055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.950 [2024-06-10 10:50:07.723065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.950 [2024-06-10 10:50:07.723072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.950 [2024-06-10 10:50:07.723082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.950 [2024-06-10 10:50:07.723089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.950 [2024-06-10 10:50:07.723098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:96408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.950 [2024-06-10 10:50:07.723106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.950 [2024-06-10 10:50:07.723115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:96416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.950 [2024-06-10 10:50:07.723122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.950 [2024-06-10 10:50:07.723131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:96424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.950 [2024-06-10 10:50:07.723139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.950 [2024-06-10 10:50:07.723148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96432 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.950 [2024-06-10 10:50:07.723155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.950 [2024-06-10 10:50:07.723164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:96440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.950 [2024-06-10 10:50:07.723171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.950 [2024-06-10 10:50:07.723180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:96448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.951 [2024-06-10 10:50:07.723188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.951 [2024-06-10 10:50:07.723203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.951 [2024-06-10 10:50:07.723210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.951 [2024-06-10 10:50:07.723219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:96464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.951 [2024-06-10 10:50:07.723227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.951 [2024-06-10 10:50:07.723236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:96472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.951 [2024-06-10 10:50:07.723249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.951 [2024-06-10 10:50:07.723259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:96480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.951 [2024-06-10 10:50:07.723266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.951 [2024-06-10 10:50:07.723274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.951 [2024-06-10 10:50:07.723281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.951 [2024-06-10 10:50:07.723291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.951 [2024-06-10 10:50:07.723299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.951 [2024-06-10 10:50:07.723308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:96504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.951 [2024-06-10 10:50:07.723315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.951 [2024-06-10 10:50:07.723324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:96512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:57.951 [2024-06-10 10:50:07.723331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.951 [2024-06-10 10:50:07.723341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.951 [2024-06-10 10:50:07.723348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.951 [2024-06-10 10:50:07.723357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:96528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.951 [2024-06-10 10:50:07.723364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.951 [2024-06-10 10:50:07.723373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.951 [2024-06-10 10:50:07.723380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.951 [2024-06-10 10:50:07.723389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:96544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.951 [2024-06-10 10:50:07.723396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.951 [2024-06-10 10:50:07.723405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:96552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.951 [2024-06-10 10:50:07.723414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.951 [2024-06-10 10:50:07.723423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:96560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.951 [2024-06-10 10:50:07.723431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.951 [2024-06-10 10:50:07.723440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:96568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.951 [2024-06-10 10:50:07.723447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.951 [2024-06-10 10:50:07.723456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:96576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.951 [2024-06-10 10:50:07.723463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.951 [2024-06-10 10:50:07.723472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:96584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.951 [2024-06-10 10:50:07.723479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.951 [2024-06-10 10:50:07.723488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:96592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.951 [2024-06-10 10:50:07.723495] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.951 [2024-06-10 10:50:07.723505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:96600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.951 [2024-06-10 10:50:07.723511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.951 [2024-06-10 10:50:07.723520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:96608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.951 [2024-06-10 10:50:07.723528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.951 [2024-06-10 10:50:07.723537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.951 [2024-06-10 10:50:07.723544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.951 [2024-06-10 10:50:07.723553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:96624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.951 [2024-06-10 10:50:07.723561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.951 [2024-06-10 10:50:07.723570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:96632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.951 [2024-06-10 10:50:07.723577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.951 [2024-06-10 10:50:07.723587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:96640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.951 [2024-06-10 10:50:07.723594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.951 [2024-06-10 10:50:07.723603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:96648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.951 [2024-06-10 10:50:07.723610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.951 [2024-06-10 10:50:07.723621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:96656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.951 [2024-06-10 10:50:07.723628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.951 [2024-06-10 10:50:07.723637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:96664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.951 [2024-06-10 10:50:07.723644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.951 [2024-06-10 10:50:07.723653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:96672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.951 [2024-06-10 10:50:07.723660] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.951 [2024-06-10 10:50:07.723669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:96680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.951 [2024-06-10 10:50:07.723676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.951 [2024-06-10 10:50:07.723685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:96688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.951 [2024-06-10 10:50:07.723692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.951 [2024-06-10 10:50:07.723701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:96696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.951 [2024-06-10 10:50:07.723708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.951 [2024-06-10 10:50:07.723717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:96704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.951 [2024-06-10 10:50:07.723725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.951 [2024-06-10 10:50:07.723733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:96712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.951 [2024-06-10 10:50:07.723741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.951 [2024-06-10 10:50:07.723750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.951 [2024-06-10 10:50:07.723757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.951 [2024-06-10 10:50:07.723766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:96728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.951 [2024-06-10 10:50:07.723774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.951 [2024-06-10 10:50:07.723783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.951 [2024-06-10 10:50:07.723790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.951 [2024-06-10 10:50:07.723799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:96744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.952 [2024-06-10 10:50:07.723806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.952 [2024-06-10 10:50:07.723814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.952 [2024-06-10 10:50:07.723823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.952 [2024-06-10 10:50:07.723833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:96760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.952 [2024-06-10 10:50:07.723840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.952 [2024-06-10 10:50:07.723849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:96768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.952 [2024-06-10 10:50:07.723855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.952 [2024-06-10 10:50:07.723865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:96776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.952 [2024-06-10 10:50:07.723871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.952 [2024-06-10 10:50:07.723881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.952 [2024-06-10 10:50:07.723888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.952 [2024-06-10 10:50:07.723897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:96792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.952 [2024-06-10 10:50:07.723903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.952 [2024-06-10 10:50:07.723912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:96800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.952 [2024-06-10 10:50:07.723920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.952 [2024-06-10 10:50:07.723929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:96808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.952 [2024-06-10 10:50:07.723936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.952 [2024-06-10 10:50:07.723944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:96816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.952 [2024-06-10 10:50:07.723951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.952 [2024-06-10 10:50:07.723960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.952 [2024-06-10 10:50:07.723967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.952 [2024-06-10 10:50:07.723976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:96832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.952 [2024-06-10 10:50:07.723984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.952 [2024-06-10 10:50:07.723992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:96840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.952 [2024-06-10 10:50:07.723999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.952 [2024-06-10 10:50:07.724008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:96848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.952 [2024-06-10 10:50:07.724017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.952 [2024-06-10 10:50:07.724026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:96856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.952 [2024-06-10 10:50:07.724034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.952 [2024-06-10 10:50:07.724044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:96864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.952 [2024-06-10 10:50:07.724050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.952 [2024-06-10 10:50:07.724059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:96872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.952 [2024-06-10 10:50:07.724067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.952 [2024-06-10 10:50:07.724076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:96880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.952 [2024-06-10 10:50:07.724084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.952 [2024-06-10 10:50:07.724094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:96888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.952 [2024-06-10 10:50:07.724102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.952 [2024-06-10 10:50:07.724112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.952 [2024-06-10 10:50:07.724120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.952 [2024-06-10 10:50:07.724130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.952 [2024-06-10 10:50:07.724138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.952 [2024-06-10 10:50:07.724146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:96912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.952 [2024-06-10 10:50:07.724154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:57.952 [2024-06-10 10:50:07.724164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:96920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.952 [2024-06-10 10:50:07.724172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.952 [2024-06-10 10:50:07.724182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.952 [2024-06-10 10:50:07.724191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.952 [2024-06-10 10:50:07.724201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.952 [2024-06-10 10:50:07.724210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.952 [2024-06-10 10:50:07.724221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.952 [2024-06-10 10:50:07.724228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.952 [2024-06-10 10:50:07.724239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:96952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.952 [2024-06-10 10:50:07.724251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.952 [2024-06-10 10:50:07.724263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:96960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.952 [2024-06-10 10:50:07.724271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.952 [2024-06-10 10:50:07.724280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:96968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.952 [2024-06-10 10:50:07.724287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.952 [2024-06-10 10:50:07.724296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:96976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.952 [2024-06-10 10:50:07.724304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.952 [2024-06-10 10:50:07.724313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:96984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.952 [2024-06-10 10:50:07.724320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.952 [2024-06-10 10:50:07.724330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.952 [2024-06-10 10:50:07.724337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.952 [2024-06-10 10:50:07.724346] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:97000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.952 [2024-06-10 10:50:07.724353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.952 [2024-06-10 10:50:07.724361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:97008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.952 [2024-06-10 10:50:07.724368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.952 [2024-06-10 10:50:07.724377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:97016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.952 [2024-06-10 10:50:07.724384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.952 [2024-06-10 10:50:07.724394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:97024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.952 [2024-06-10 10:50:07.724401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.952 [2024-06-10 10:50:07.724410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.952 [2024-06-10 10:50:07.724417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.952 [2024-06-10 10:50:07.724426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.953 [2024-06-10 10:50:07.724433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.953 [2024-06-10 10:50:07.724442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:97048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.953 [2024-06-10 10:50:07.724449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.953 [2024-06-10 10:50:07.724458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.953 [2024-06-10 10:50:07.724467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.953 [2024-06-10 10:50:07.724476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.953 [2024-06-10 10:50:07.724483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.953 [2024-06-10 10:50:07.724492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.953 [2024-06-10 10:50:07.724499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.953 [2024-06-10 10:50:07.724508] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.953 [2024-06-10 10:50:07.724515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.953 [2024-06-10 10:50:07.724524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.953 [2024-06-10 10:50:07.724531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.953 [2024-06-10 10:50:07.724540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.953 [2024-06-10 10:50:07.724547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.953 [2024-06-10 10:50:07.724556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.953 [2024-06-10 10:50:07.724563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.953 [2024-06-10 10:50:07.724572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.953 [2024-06-10 10:50:07.724579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.953 [2024-06-10 10:50:07.724589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.953 [2024-06-10 10:50:07.724596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.953 [2024-06-10 10:50:07.724604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.953 [2024-06-10 10:50:07.724612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.953 [2024-06-10 10:50:07.724621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.953 [2024-06-10 10:50:07.724629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.953 [2024-06-10 10:50:07.724638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.953 [2024-06-10 10:50:07.724644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.953 [2024-06-10 10:50:07.724653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.953 [2024-06-10 10:50:07.724662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.953 [2024-06-10 10:50:07.724673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97160 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.953 [2024-06-10 10:50:07.724680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.953 [2024-06-10 10:50:07.724689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.953 [2024-06-10 10:50:07.724696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.953 [2024-06-10 10:50:07.724705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.953 [2024-06-10 10:50:07.724713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.953 [2024-06-10 10:50:07.724722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.953 [2024-06-10 10:50:07.724729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.953 [2024-06-10 10:50:07.724738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.953 [2024-06-10 10:50:07.724745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.953 [2024-06-10 10:50:07.724754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.953 [2024-06-10 10:50:07.724762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.953 [2024-06-10 10:50:07.724772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.953 [2024-06-10 10:50:07.724779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.953 [2024-06-10 10:50:07.724788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.953 [2024-06-10 10:50:07.724795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.953 [2024-06-10 10:50:07.724804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.953 [2024-06-10 10:50:07.724811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.953 [2024-06-10 10:50:07.724821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.953 [2024-06-10 10:50:07.724828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.953 [2024-06-10 10:50:07.724837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:57.953 [2024-06-10 10:50:07.724844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.953 [2024-06-10 10:50:07.724853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.953 [2024-06-10 10:50:07.724859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.953 [2024-06-10 10:50:07.724869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.953 [2024-06-10 10:50:07.724876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.953 [2024-06-10 10:50:07.724887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.953 [2024-06-10 10:50:07.724894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.953 [2024-06-10 10:50:07.724902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.953 [2024-06-10 10:50:07.724909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.953 [2024-06-10 10:50:07.724919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:97280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.953 [2024-06-10 10:50:07.724926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.953 [2024-06-10 10:50:07.724935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.953 [2024-06-10 10:50:07.724942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.953 [2024-06-10 10:50:07.724962] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.953 [2024-06-10 10:50:07.724970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97296 len:8 PRP1 0x0 PRP2 0x0 00:24:57.953 [2024-06-10 10:50:07.724977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.953 [2024-06-10 10:50:07.724988] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.953 [2024-06-10 10:50:07.724993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.953 [2024-06-10 10:50:07.724999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97304 len:8 PRP1 0x0 PRP2 0x0 00:24:57.953 [2024-06-10 10:50:07.725006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.953 [2024-06-10 10:50:07.725013] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.953 [2024-06-10 10:50:07.725019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:24:57.953 [2024-06-10 10:50:07.725025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97312 len:8 PRP1 0x0 PRP2 0x0 00:24:57.953 [2024-06-10 10:50:07.725032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.953 [2024-06-10 10:50:07.725040] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.954 [2024-06-10 10:50:07.725045] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.954 [2024-06-10 10:50:07.725051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97320 len:8 PRP1 0x0 PRP2 0x0 00:24:57.954 [2024-06-10 10:50:07.725058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.954 [2024-06-10 10:50:07.725065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.954 [2024-06-10 10:50:07.725071] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.954 [2024-06-10 10:50:07.725077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97328 len:8 PRP1 0x0 PRP2 0x0 00:24:57.954 [2024-06-10 10:50:07.725085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.954 [2024-06-10 10:50:07.725092] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.954 [2024-06-10 10:50:07.725099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.954 [2024-06-10 10:50:07.725105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97336 len:8 PRP1 0x0 PRP2 0x0 00:24:57.954 [2024-06-10 10:50:07.725112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.954 [2024-06-10 10:50:07.725120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.954 [2024-06-10 10:50:07.725126] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.954 [2024-06-10 10:50:07.725132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97344 len:8 PRP1 0x0 PRP2 0x0 00:24:57.954 [2024-06-10 10:50:07.725140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.954 [2024-06-10 10:50:07.725147] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.954 [2024-06-10 10:50:07.725152] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.954 [2024-06-10 10:50:07.725158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97352 len:8 PRP1 0x0 PRP2 0x0 00:24:57.954 [2024-06-10 10:50:07.725166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.954 [2024-06-10 10:50:07.725174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.954 [2024-06-10 10:50:07.725179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.954 [2024-06-10 
10:50:07.725185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97360 len:8 PRP1 0x0 PRP2 0x0 00:24:57.954 [2024-06-10 10:50:07.725192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.954 [2024-06-10 10:50:07.725199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.954 [2024-06-10 10:50:07.725204] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.954 [2024-06-10 10:50:07.725210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97368 len:8 PRP1 0x0 PRP2 0x0 00:24:57.954 [2024-06-10 10:50:07.725217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.954 [2024-06-10 10:50:07.725225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.954 [2024-06-10 10:50:07.725231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.954 [2024-06-10 10:50:07.725237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97376 len:8 PRP1 0x0 PRP2 0x0 00:24:57.954 [2024-06-10 10:50:07.725247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.954 [2024-06-10 10:50:07.725254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.954 [2024-06-10 10:50:07.725260] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.954 [2024-06-10 10:50:07.725265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97384 len:8 PRP1 0x0 PRP2 0x0 00:24:57.954 [2024-06-10 10:50:07.725272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.954 [2024-06-10 10:50:07.725309] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e86670 was disconnected and freed. reset controller. 
00:24:57.954 [2024-06-10 10:50:07.725318] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:57.954 [2024-06-10 10:50:07.725337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.954 [2024-06-10 10:50:07.725345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.954 [2024-06-10 10:50:07.725356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.954 [2024-06-10 10:50:07.725363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.954 [2024-06-10 10:50:07.725370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.954 [2024-06-10 10:50:07.725377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.954 [2024-06-10 10:50:07.725386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.954 [2024-06-10 10:50:07.725393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.954 [2024-06-10 10:50:07.725401] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.954 [2024-06-10 10:50:07.728984] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.954 [2024-06-10 10:50:07.729010] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e65a90 (9): Bad file descriptor 00:24:57.954 [2024-06-10 10:50:07.804225] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:57.954 [2024-06-10 10:50:11.247185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.954 [2024-06-10 10:50:11.247224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.954 [2024-06-10 10:50:11.247239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:39320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.954 [2024-06-10 10:50:11.247253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.954 [2024-06-10 10:50:11.247263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:39328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.954 [2024-06-10 10:50:11.247270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.954 [2024-06-10 10:50:11.247280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:39336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.954 [2024-06-10 10:50:11.247287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.954 [2024-06-10 10:50:11.247296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:39344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.954 [2024-06-10 10:50:11.247303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.954 [2024-06-10 10:50:11.247312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:39352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.954 [2024-06-10 10:50:11.247319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.954 [2024-06-10 10:50:11.247329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.954 [2024-06-10 10:50:11.247336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.954 [2024-06-10 10:50:11.247345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:39368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.954 [2024-06-10 10:50:11.247352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.954 [2024-06-10 10:50:11.247366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:39376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.954 [2024-06-10 10:50:11.247373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.954 [2024-06-10 10:50:11.247382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.954 [2024-06-10 10:50:11.247390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.954 [2024-06-10 10:50:11.247399] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:39392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.954 [2024-06-10 10:50:11.247406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.954 [2024-06-10 10:50:11.247415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:39400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.954 [2024-06-10 10:50:11.247422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.954 [2024-06-10 10:50:11.247432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:39408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.954 [2024-06-10 10:50:11.247439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.954 [2024-06-10 10:50:11.247449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:39416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.954 [2024-06-10 10:50:11.247455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.954 [2024-06-10 10:50:11.247465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:39424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.954 [2024-06-10 10:50:11.247471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.954 [2024-06-10 10:50:11.247481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:39432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.955 [2024-06-10 10:50:11.247488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.955 [2024-06-10 10:50:11.247498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:39440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.955 [2024-06-10 10:50:11.247505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.955 [2024-06-10 10:50:11.247514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.955 [2024-06-10 10:50:11.247521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.955 [2024-06-10 10:50:11.247531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:39456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.955 [2024-06-10 10:50:11.247538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.955 [2024-06-10 10:50:11.247547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:39464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.955 [2024-06-10 10:50:11.247554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.955 [2024-06-10 10:50:11.247563] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:39472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.955 [2024-06-10 10:50:11.247571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.955 [2024-06-10 10:50:11.247581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.955 [2024-06-10 10:50:11.247588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.955 [2024-06-10 10:50:11.247598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.955 [2024-06-10 10:50:11.247604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.955 [2024-06-10 10:50:11.247613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:39496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.955 [2024-06-10 10:50:11.247620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.955 [2024-06-10 10:50:11.247629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:39504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.955 [2024-06-10 10:50:11.247636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.955 [2024-06-10 10:50:11.247646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:39512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.955 [2024-06-10 10:50:11.247653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.955 [2024-06-10 10:50:11.247662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:39520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.955 [2024-06-10 10:50:11.247668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.955 [2024-06-10 10:50:11.247682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:39528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.955 [2024-06-10 10:50:11.247690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.955 [2024-06-10 10:50:11.247699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:39536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.955 [2024-06-10 10:50:11.247706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.955 [2024-06-10 10:50:11.247715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:39544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.955 [2024-06-10 10:50:11.247722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.955 [2024-06-10 10:50:11.247731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:107 nsid:1 lba:39552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.955 [2024-06-10 10:50:11.247739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.955 [2024-06-10 10:50:11.247747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:39560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.955 [2024-06-10 10:50:11.247754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.955 [2024-06-10 10:50:11.247763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.955 [2024-06-10 10:50:11.247772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.955 [2024-06-10 10:50:11.247782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:39576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.955 [2024-06-10 10:50:11.247790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.955 [2024-06-10 10:50:11.247799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:39584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.955 [2024-06-10 10:50:11.247806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.955 [2024-06-10 10:50:11.247815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:39592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.955 [2024-06-10 10:50:11.247822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.955 [2024-06-10 10:50:11.247831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:39600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.955 [2024-06-10 10:50:11.247838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.955 [2024-06-10 10:50:11.247847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:39608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.955 [2024-06-10 10:50:11.247854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.955 [2024-06-10 10:50:11.247863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.955 [2024-06-10 10:50:11.247870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.955 [2024-06-10 10:50:11.247879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:39624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.955 [2024-06-10 10:50:11.247886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.955 [2024-06-10 10:50:11.247895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:39632 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.955 [2024-06-10 10:50:11.247902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.955 [2024-06-10 10:50:11.247911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.955 [2024-06-10 10:50:11.247918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.955 [2024-06-10 10:50:11.247927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:39648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.955 [2024-06-10 10:50:11.247934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.955 [2024-06-10 10:50:11.247943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:39656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.955 [2024-06-10 10:50:11.247950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.955 [2024-06-10 10:50:11.247959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:39664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.955 [2024-06-10 10:50:11.247965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.956 [2024-06-10 10:50:11.247975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:39672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.956 [2024-06-10 10:50:11.247984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.956 [2024-06-10 10:50:11.247993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:39680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.956 [2024-06-10 10:50:11.248000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.956 [2024-06-10 10:50:11.248009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:39688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.956 [2024-06-10 10:50:11.248017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.956 [2024-06-10 10:50:11.248025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:39696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.956 [2024-06-10 10:50:11.248032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.956 [2024-06-10 10:50:11.248041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:39704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.956 [2024-06-10 10:50:11.248048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.956 [2024-06-10 10:50:11.248057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:39712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:57.956 [2024-06-10 10:50:11.248064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.956 [2024-06-10 10:50:11.248073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:39720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.956 [2024-06-10 10:50:11.248080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.956 [2024-06-10 10:50:11.248089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.956 [2024-06-10 10:50:11.248095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.956 [2024-06-10 10:50:11.248104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:39736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.956 [2024-06-10 10:50:11.248112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.956 [2024-06-10 10:50:11.248121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.956 [2024-06-10 10:50:11.248128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.956 [2024-06-10 10:50:11.248137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:39752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.956 [2024-06-10 10:50:11.248144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.956 [2024-06-10 10:50:11.248153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:39760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.956 [2024-06-10 10:50:11.248160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.956 [2024-06-10 10:50:11.248169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:39768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.956 [2024-06-10 10:50:11.248176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.956 [2024-06-10 10:50:11.248185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:39776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.956 [2024-06-10 10:50:11.248193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.956 [2024-06-10 10:50:11.248206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:39784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.956 [2024-06-10 10:50:11.248213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.956 [2024-06-10 10:50:11.248222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:39792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.956 [2024-06-10 10:50:11.248229] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.956 [2024-06-10 10:50:11.248238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:39800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.956 [2024-06-10 10:50:11.248248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.956 [2024-06-10 10:50:11.248257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:39808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.956 [2024-06-10 10:50:11.248265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.956 [2024-06-10 10:50:11.248274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:39816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.956 [2024-06-10 10:50:11.248282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.956 [2024-06-10 10:50:11.248291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:39824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.956 [2024-06-10 10:50:11.248298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.956 [2024-06-10 10:50:11.248307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:39832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.956 [2024-06-10 10:50:11.248314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.956 [2024-06-10 10:50:11.248323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:39840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.956 [2024-06-10 10:50:11.248331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.956 [2024-06-10 10:50:11.248340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:39848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.956 [2024-06-10 10:50:11.248347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.956 [2024-06-10 10:50:11.248356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:39856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.956 [2024-06-10 10:50:11.248363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.956 [2024-06-10 10:50:11.248372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:39864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.956 [2024-06-10 10:50:11.248380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.956 [2024-06-10 10:50:11.248388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.956 [2024-06-10 10:50:11.248396] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.956 [2024-06-10 10:50:11.248406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:39880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.956 [2024-06-10 10:50:11.248413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.956 [2024-06-10 10:50:11.248422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:39888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.956 [2024-06-10 10:50:11.248429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.956 [2024-06-10 10:50:11.248438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:39896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.956 [2024-06-10 10:50:11.248445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.956 [2024-06-10 10:50:11.248454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:39904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.956 [2024-06-10 10:50:11.248462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.956 [2024-06-10 10:50:11.248471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:39912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.956 [2024-06-10 10:50:11.248478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.956 [2024-06-10 10:50:11.248487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:39920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.956 [2024-06-10 10:50:11.248494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.956 [2024-06-10 10:50:11.248503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:39928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.956 [2024-06-10 10:50:11.248510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.956 [2024-06-10 10:50:11.248519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:39936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.956 [2024-06-10 10:50:11.248526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.956 [2024-06-10 10:50:11.248535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.956 [2024-06-10 10:50:11.248542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.956 [2024-06-10 10:50:11.248551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:39952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.956 [2024-06-10 10:50:11.248558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.956 [2024-06-10 10:50:11.248567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:39960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.956 [2024-06-10 10:50:11.248575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.956 [2024-06-10 10:50:11.248584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:39968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.957 [2024-06-10 10:50:11.248590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.957 [2024-06-10 10:50:11.248599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:39976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.957 [2024-06-10 10:50:11.248608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.957 [2024-06-10 10:50:11.248617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:39984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.957 [2024-06-10 10:50:11.248624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.957 [2024-06-10 10:50:11.248633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.957 [2024-06-10 10:50:11.248640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.957 [2024-06-10 10:50:11.248649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.957 [2024-06-10 10:50:11.248656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.957 [2024-06-10 10:50:11.248665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:40008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.957 [2024-06-10 10:50:11.248672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.957 [2024-06-10 10:50:11.248681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:40016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.957 [2024-06-10 10:50:11.248688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.957 [2024-06-10 10:50:11.248697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:40024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.957 [2024-06-10 10:50:11.248704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.957 [2024-06-10 10:50:11.248713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.957 [2024-06-10 10:50:11.248721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.957 [2024-06-10 10:50:11.248733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:40040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.957 [2024-06-10 10:50:11.248740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.957 [2024-06-10 10:50:11.248749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:40048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.957 [2024-06-10 10:50:11.248756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.957 [2024-06-10 10:50:11.248765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:40056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.957 [2024-06-10 10:50:11.248772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.957 [2024-06-10 10:50:11.248781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:40064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.957 [2024-06-10 10:50:11.248788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.957 [2024-06-10 10:50:11.248797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:40072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.957 [2024-06-10 10:50:11.248804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.957 [2024-06-10 10:50:11.248814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:40080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.957 [2024-06-10 10:50:11.248822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.957 [2024-06-10 10:50:11.248831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:40088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.957 [2024-06-10 10:50:11.248838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.957 [2024-06-10 10:50:11.248847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:40096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.957 [2024-06-10 10:50:11.248854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.957 [2024-06-10 10:50:11.248864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:40104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.957 [2024-06-10 10:50:11.248871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.957 [2024-06-10 10:50:11.248880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:40112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.957 [2024-06-10 10:50:11.248886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.957 
[2024-06-10 10:50:11.248896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:40120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.957 [2024-06-10 10:50:11.248902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.957 [2024-06-10 10:50:11.248912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:40128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.957 [2024-06-10 10:50:11.248919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.957 [2024-06-10 10:50:11.248928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.957 [2024-06-10 10:50:11.248935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.957 [2024-06-10 10:50:11.248944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.957 [2024-06-10 10:50:11.248951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.957 [2024-06-10 10:50:11.248960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:40152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.957 [2024-06-10 10:50:11.248967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.957 [2024-06-10 10:50:11.248976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:40160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.957 [2024-06-10 10:50:11.248985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.957 [2024-06-10 10:50:11.248995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:40168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.957 [2024-06-10 10:50:11.249002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.957 [2024-06-10 10:50:11.249011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:40176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.957 [2024-06-10 10:50:11.249019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.957 [2024-06-10 10:50:11.249028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:40184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.957 [2024-06-10 10:50:11.249035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.957 [2024-06-10 10:50:11.249045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:40192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.957 [2024-06-10 10:50:11.249052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.957 [2024-06-10 10:50:11.249061] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:40200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.957 [2024-06-10 10:50:11.249068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.957 [2024-06-10 10:50:11.249076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:40208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.957 [2024-06-10 10:50:11.249083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.957 [2024-06-10 10:50:11.249093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:40216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.957 [2024-06-10 10:50:11.249100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.957 [2024-06-10 10:50:11.249109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:40224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.957 [2024-06-10 10:50:11.249116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.957 [2024-06-10 10:50:11.249125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:40232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.957 [2024-06-10 10:50:11.249131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.957 [2024-06-10 10:50:11.249141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:40240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.957 [2024-06-10 10:50:11.249148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.957 [2024-06-10 10:50:11.249157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:40248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.957 [2024-06-10 10:50:11.249164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.957 [2024-06-10 10:50:11.249173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:40256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.957 [2024-06-10 10:50:11.249180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.957 [2024-06-10 10:50:11.249189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:40264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.957 [2024-06-10 10:50:11.249196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.958 [2024-06-10 10:50:11.249205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:40272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.958 [2024-06-10 10:50:11.249212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.958 [2024-06-10 10:50:11.249220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:2 nsid:1 lba:40280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.958 [2024-06-10 10:50:11.249228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.958 [2024-06-10 10:50:11.249238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:40288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.958 [2024-06-10 10:50:11.249249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.958 [2024-06-10 10:50:11.249259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:40296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.958 [2024-06-10 10:50:11.249267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.958 [2024-06-10 10:50:11.249276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:40304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.958 [2024-06-10 10:50:11.249283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.958 [2024-06-10 10:50:11.249293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:40312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.958 [2024-06-10 10:50:11.249300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.958 [2024-06-10 10:50:11.249309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:40320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.958 [2024-06-10 10:50:11.249316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.958 [2024-06-10 10:50:11.249339] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.958 [2024-06-10 10:50:11.249346] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.958 [2024-06-10 10:50:11.249353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40328 len:8 PRP1 0x0 PRP2 0x0 00:24:57.958 [2024-06-10 10:50:11.249360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.958 [2024-06-10 10:50:11.249398] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e885e0 was disconnected and freed. reset controller. 
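The run of paired *NOTICE* lines above is nvme_qpair_abort_queued_reqs() draining qpair 0x1e885e0 for the reset: every queued READ/WRITE on sqid 1 is completed with generic status 00/08, which SPDK prints as ABORTED - SQ DELETION. A minimal sketch of how that status maps onto the public spdk/nvme.h definitions (the helper name and the fabricated completion are illustrative only, not part of this test):

#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Sketch: classify the (00/08) completion status that the log above prints
 * as "ABORTED - SQ DELETION". */
static bool
cpl_is_sq_deletion_abort(const struct spdk_nvme_cpl *cpl)
{
        return spdk_nvme_cpl_is_error(cpl) &&
               cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
               cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
}

int
main(void)
{
        /* Fabricate a completion matching the log entries: sct=0x00, sc=0x08. */
        struct spdk_nvme_cpl cpl = {0};

        cpl.status.sct = SPDK_NVME_SCT_GENERIC;            /* 00 */
        cpl.status.sc = SPDK_NVME_SC_ABORTED_SQ_DELETION;  /* 08 */

        printf("retryable abort: %s\n", cpl_is_sq_deletion_abort(&cpl) ? "yes" : "no");
        return 0;
}

Built against the SPDK headers this prints "retryable abort: yes" for the 00/08 status shown above; such aborts are the expected side effect of deleting the submission queue during a reset, not media errors.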
00:24:57.958 [2024-06-10 10:50:11.249407] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:24:57.958 [2024-06-10 10:50:11.249425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.958 [2024-06-10 10:50:11.249434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.958 [2024-06-10 10:50:11.249442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.958 [2024-06-10 10:50:11.249449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.958 [2024-06-10 10:50:11.249457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.958 [2024-06-10 10:50:11.249464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.958 [2024-06-10 10:50:11.249472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.958 [2024-06-10 10:50:11.249479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.958 [2024-06-10 10:50:11.249487] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.958 [2024-06-10 10:50:11.253067] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.958 [2024-06-10 10:50:11.253094] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e65a90 (9): Bad file descriptor 00:24:57.958 [2024-06-10 10:50:11.329107] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
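With the I/O qpair gone, bdev_nvme_failover_trid() moves the active path from 10.0.0.2:4421 to 10.0.0.2:4422, the admin qpair's outstanding ASYNC EVENT REQUESTs are aborted with the same 00/08 status, the controller is marked failed, and the reconnect finishes with "Resetting controller successful" before the next abort burst at 10:50:15. The two portals differ only in trsvcid; a sketch of describing them as SPDK transport IDs with the public spdk_nvme_transport_id_parse()/spdk_nvme_transport_id_compare() API (illustrative only, not how the test scripts drive the failover):

#include <stdio.h>
#include "spdk/nvme.h"

int
main(void)
{
        /* The two TCP portals named in the failover notice above; both export
         * the same subsystem, nqn.2016-06.io.spdk:cnode1. */
        struct spdk_nvme_transport_id primary = {0}, secondary = {0};

        if (spdk_nvme_transport_id_parse(&primary,
                "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4421 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0 ||
            spdk_nvme_transport_id_parse(&secondary,
                "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4422 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
                fprintf(stderr, "failed to parse transport ID\n");
                return 1;
        }

        /* Same subsystem, different portal: this is the pair that
         * bdev_nvme_failover_trid switches between when a path drops. */
        printf("paths differ: %s\n",
               spdk_nvme_transport_id_compare(&primary, &secondary) != 0 ? "yes" : "no");
        return 0;
}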
00:24:57.958 [2024-06-10 10:50:15.596657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:58688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.958 [2024-06-10 10:50:15.596695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.958 [2024-06-10 10:50:15.596713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:58696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.958 [2024-06-10 10:50:15.596721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.958 [2024-06-10 10:50:15.596731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:58704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.958 [2024-06-10 10:50:15.596739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.958 [2024-06-10 10:50:15.596749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.958 [2024-06-10 10:50:15.596756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.958 [2024-06-10 10:50:15.596765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:58720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.958 [2024-06-10 10:50:15.596773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.958 [2024-06-10 10:50:15.596782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:58728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.958 [2024-06-10 10:50:15.596789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.958 [2024-06-10 10:50:15.596798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:58736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.958 [2024-06-10 10:50:15.596805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.958 [2024-06-10 10:50:15.596814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:58744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.958 [2024-06-10 10:50:15.596822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.958 [2024-06-10 10:50:15.596831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:58752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.958 [2024-06-10 10:50:15.596838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.958 [2024-06-10 10:50:15.596847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:58760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.958 [2024-06-10 10:50:15.596855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.958 [2024-06-10 10:50:15.596863] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:58768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.958 [2024-06-10 10:50:15.596871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.958 [2024-06-10 10:50:15.596880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:58776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.958 [2024-06-10 10:50:15.596892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.958 [2024-06-10 10:50:15.596901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:58784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.958 [2024-06-10 10:50:15.596908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.958 [2024-06-10 10:50:15.596917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:58792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.958 [2024-06-10 10:50:15.596925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.958 [2024-06-10 10:50:15.596934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:58800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.958 [2024-06-10 10:50:15.596941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.958 [2024-06-10 10:50:15.596950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:58808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.958 [2024-06-10 10:50:15.596958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.958 [2024-06-10 10:50:15.596967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:58816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.958 [2024-06-10 10:50:15.596974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.958 [2024-06-10 10:50:15.596983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:58824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.958 [2024-06-10 10:50:15.596990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.958 [2024-06-10 10:50:15.596999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:58832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.958 [2024-06-10 10:50:15.597006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.958 [2024-06-10 10:50:15.597017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:58840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.958 [2024-06-10 10:50:15.597024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.959 [2024-06-10 10:50:15.597033] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:58848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.959 [2024-06-10 10:50:15.597040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.959 [2024-06-10 10:50:15.597049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:58856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.959 [2024-06-10 10:50:15.597056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.959 [2024-06-10 10:50:15.597065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:58864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.959 [2024-06-10 10:50:15.597073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.959 [2024-06-10 10:50:15.597082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:58872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.959 [2024-06-10 10:50:15.597089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.959 [2024-06-10 10:50:15.597099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:58880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.959 [2024-06-10 10:50:15.597106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.959 [2024-06-10 10:50:15.597116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:58888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.959 [2024-06-10 10:50:15.597124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.959 [2024-06-10 10:50:15.597133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:58896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.959 [2024-06-10 10:50:15.597140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.959 [2024-06-10 10:50:15.597149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:58904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.959 [2024-06-10 10:50:15.597156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.959 [2024-06-10 10:50:15.597165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:58912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.959 [2024-06-10 10:50:15.597172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.959 [2024-06-10 10:50:15.597181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:58920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.959 [2024-06-10 10:50:15.597189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.959 [2024-06-10 10:50:15.597198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:107 nsid:1 lba:58928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.959 [2024-06-10 10:50:15.597205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.959 [2024-06-10 10:50:15.597214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:58936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.959 [2024-06-10 10:50:15.597221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.959 [2024-06-10 10:50:15.597230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:58944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.959 [2024-06-10 10:50:15.597238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.959 [2024-06-10 10:50:15.597255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:58952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.959 [2024-06-10 10:50:15.597262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.959 [2024-06-10 10:50:15.597271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:58960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.959 [2024-06-10 10:50:15.597279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.959 [2024-06-10 10:50:15.597289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:58968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.959 [2024-06-10 10:50:15.597296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.959 [2024-06-10 10:50:15.597305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:58976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.959 [2024-06-10 10:50:15.597314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.959 [2024-06-10 10:50:15.597323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:58984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.959 [2024-06-10 10:50:15.597330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.959 [2024-06-10 10:50:15.597340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:58992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.959 [2024-06-10 10:50:15.597347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.959 [2024-06-10 10:50:15.597356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:59000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.959 [2024-06-10 10:50:15.597363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.959 [2024-06-10 10:50:15.597373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:59008 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.959 [2024-06-10 10:50:15.597381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.959 [2024-06-10 10:50:15.597390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:59016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.959 [2024-06-10 10:50:15.597397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.959 [2024-06-10 10:50:15.597406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:59024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.959 [2024-06-10 10:50:15.597413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.959 [2024-06-10 10:50:15.597423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:59032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.959 [2024-06-10 10:50:15.597430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.959 [2024-06-10 10:50:15.597439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:59040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.959 [2024-06-10 10:50:15.597446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.959 [2024-06-10 10:50:15.597455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:59048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.959 [2024-06-10 10:50:15.597462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.959 [2024-06-10 10:50:15.597472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:59056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.959 [2024-06-10 10:50:15.597479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.959 [2024-06-10 10:50:15.597488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:59064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.959 [2024-06-10 10:50:15.597495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.959 [2024-06-10 10:50:15.597504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:59072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.959 [2024-06-10 10:50:15.597511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.959 [2024-06-10 10:50:15.597521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:59080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.959 [2024-06-10 10:50:15.597530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.959 [2024-06-10 10:50:15.597539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:59088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:57.959 [2024-06-10 10:50:15.597546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.959 [2024-06-10 10:50:15.597556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:59096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.959 [2024-06-10 10:50:15.597562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.959 [2024-06-10 10:50:15.597572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:59104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.959 [2024-06-10 10:50:15.597579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.959 [2024-06-10 10:50:15.597588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:59112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.959 [2024-06-10 10:50:15.597595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.959 [2024-06-10 10:50:15.597604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:59120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.959 [2024-06-10 10:50:15.597611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.960 [2024-06-10 10:50:15.597620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.960 [2024-06-10 10:50:15.597627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.960 [2024-06-10 10:50:15.597636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.960 [2024-06-10 10:50:15.597643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.960 [2024-06-10 10:50:15.597652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:59144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.960 [2024-06-10 10:50:15.597659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.960 [2024-06-10 10:50:15.597668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:59152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.960 [2024-06-10 10:50:15.597675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.960 [2024-06-10 10:50:15.597684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:59160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.960 [2024-06-10 10:50:15.597691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.960 [2024-06-10 10:50:15.597700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:59168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.960 [2024-06-10 10:50:15.597707] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.960 [2024-06-10 10:50:15.597716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:59176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.960 [2024-06-10 10:50:15.597723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.960 [2024-06-10 10:50:15.597734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:59184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.960 [2024-06-10 10:50:15.597741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.960 [2024-06-10 10:50:15.597750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:59192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.960 [2024-06-10 10:50:15.597757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.960 [2024-06-10 10:50:15.597766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:59200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.960 [2024-06-10 10:50:15.597775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.960 [2024-06-10 10:50:15.597784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:59208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.960 [2024-06-10 10:50:15.597791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.960 [2024-06-10 10:50:15.597800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:59216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.960 [2024-06-10 10:50:15.597808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.960 [2024-06-10 10:50:15.597817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:59224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.960 [2024-06-10 10:50:15.597823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.960 [2024-06-10 10:50:15.597832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:59232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.960 [2024-06-10 10:50:15.597839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.960 [2024-06-10 10:50:15.597848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:59240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.960 [2024-06-10 10:50:15.597855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.960 [2024-06-10 10:50:15.597864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:59248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.960 [2024-06-10 10:50:15.597871] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.960 [2024-06-10 10:50:15.597880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.960 [2024-06-10 10:50:15.597887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.960 [2024-06-10 10:50:15.597896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.960 [2024-06-10 10:50:15.597903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.960 [2024-06-10 10:50:15.597912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:59272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.960 [2024-06-10 10:50:15.597919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.960 [2024-06-10 10:50:15.597928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:59280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.960 [2024-06-10 10:50:15.597936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.960 [2024-06-10 10:50:15.597945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:59288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.960 [2024-06-10 10:50:15.597952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.960 [2024-06-10 10:50:15.597961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:59296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.960 [2024-06-10 10:50:15.597968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.960 [2024-06-10 10:50:15.597978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:59304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.960 [2024-06-10 10:50:15.597985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.960 [2024-06-10 10:50:15.597993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:59312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.960 [2024-06-10 10:50:15.598001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.960 [2024-06-10 10:50:15.598010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:59320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.960 [2024-06-10 10:50:15.598017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.960 [2024-06-10 10:50:15.598026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:59328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.960 [2024-06-10 10:50:15.598032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.960 [2024-06-10 10:50:15.598041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:59336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.960 [2024-06-10 10:50:15.598049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.960 [2024-06-10 10:50:15.598058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.960 [2024-06-10 10:50:15.598065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.960 [2024-06-10 10:50:15.598073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:59352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.960 [2024-06-10 10:50:15.598080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.960 [2024-06-10 10:50:15.598089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:59360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.961 [2024-06-10 10:50:15.598096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.961 [2024-06-10 10:50:15.598106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:59368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.961 [2024-06-10 10:50:15.598113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.961 [2024-06-10 10:50:15.598122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:59376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.961 [2024-06-10 10:50:15.598128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.961 [2024-06-10 10:50:15.598139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:59384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.961 [2024-06-10 10:50:15.598147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.961 [2024-06-10 10:50:15.598156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:59392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.961 [2024-06-10 10:50:15.598163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.961 [2024-06-10 10:50:15.598172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:59400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.961 [2024-06-10 10:50:15.598179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.961 [2024-06-10 10:50:15.598188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:59408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.961 [2024-06-10 10:50:15.598195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.961 [2024-06-10 10:50:15.598204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:59416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.961 [2024-06-10 10:50:15.598211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.961 [2024-06-10 10:50:15.598220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:59424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.961 [2024-06-10 10:50:15.598227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.961 [2024-06-10 10:50:15.598237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:59432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.961 [2024-06-10 10:50:15.598247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.961 [2024-06-10 10:50:15.598256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:59440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.961 [2024-06-10 10:50:15.598264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.961 [2024-06-10 10:50:15.598273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:59448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.961 [2024-06-10 10:50:15.598280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.961 [2024-06-10 10:50:15.598288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:59456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.961 [2024-06-10 10:50:15.598298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.961 [2024-06-10 10:50:15.598307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:59464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.961 [2024-06-10 10:50:15.598314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.961 [2024-06-10 10:50:15.598323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:59472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.961 [2024-06-10 10:50:15.598330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.961 [2024-06-10 10:50:15.598339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:59480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.961 [2024-06-10 10:50:15.598348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.961 [2024-06-10 10:50:15.598357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:59488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.961 [2024-06-10 10:50:15.598364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.961 
[2024-06-10 10:50:15.598373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:59496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.961 [2024-06-10 10:50:15.598379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.961 [2024-06-10 10:50:15.598389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:59504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.961 [2024-06-10 10:50:15.598396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.961 [2024-06-10 10:50:15.598405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:59512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.961 [2024-06-10 10:50:15.598411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.961 [2024-06-10 10:50:15.598420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:59520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.961 [2024-06-10 10:50:15.598427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.961 [2024-06-10 10:50:15.598436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:59528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.961 [2024-06-10 10:50:15.598443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.961 [2024-06-10 10:50:15.598452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:59536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.961 [2024-06-10 10:50:15.598459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.961 [2024-06-10 10:50:15.598467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:59544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.961 [2024-06-10 10:50:15.598474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.961 [2024-06-10 10:50:15.598483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:59552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.961 [2024-06-10 10:50:15.598490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.961 [2024-06-10 10:50:15.598499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:59560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.961 [2024-06-10 10:50:15.598506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.961 [2024-06-10 10:50:15.598514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:59568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.961 [2024-06-10 10:50:15.598521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.961 [2024-06-10 10:50:15.598530] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:59576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.961 [2024-06-10 10:50:15.598538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.961 [2024-06-10 10:50:15.598547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:59584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.961 [2024-06-10 10:50:15.598555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.961 [2024-06-10 10:50:15.598564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:59592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.961 [2024-06-10 10:50:15.598571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.961 [2024-06-10 10:50:15.598579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:59600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.961 [2024-06-10 10:50:15.598586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.961 [2024-06-10 10:50:15.598595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:59608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.961 [2024-06-10 10:50:15.598602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.961 [2024-06-10 10:50:15.598611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:59616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.961 [2024-06-10 10:50:15.598618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.961 [2024-06-10 10:50:15.598626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:59624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.961 [2024-06-10 10:50:15.598634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.961 [2024-06-10 10:50:15.598642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:59632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.961 [2024-06-10 10:50:15.598649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.961 [2024-06-10 10:50:15.598658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:59640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.961 [2024-06-10 10:50:15.598665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.962 [2024-06-10 10:50:15.598673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:59648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.962 [2024-06-10 10:50:15.598680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.962 [2024-06-10 10:50:15.598691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:113 nsid:1 lba:59656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.962 [2024-06-10 10:50:15.598698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.962 [2024-06-10 10:50:15.598707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:59664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.962 [2024-06-10 10:50:15.598714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.962 [2024-06-10 10:50:15.598723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:59672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.962 [2024-06-10 10:50:15.598731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.962 [2024-06-10 10:50:15.598740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:59680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.962 [2024-06-10 10:50:15.598747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.962 [2024-06-10 10:50:15.598757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:59688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.962 [2024-06-10 10:50:15.598765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.962 [2024-06-10 10:50:15.598773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:59696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.962 [2024-06-10 10:50:15.598780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.962 [2024-06-10 10:50:15.598805] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.962 [2024-06-10 10:50:15.598811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.962 [2024-06-10 10:50:15.598818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59704 len:8 PRP1 0x0 PRP2 0x0 00:24:57.962 [2024-06-10 10:50:15.598826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.962 [2024-06-10 10:50:15.598865] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e888f0 was disconnected and freed. reset controller. 
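Everything from the first aborted READ above down to this point is one burst of the same event: when the I/O qpair is torn down for the reset, nvme_qpair_abort_queued_reqs() completes every queued READ/WRITE with ABORTED - SQ DELETION (status 00/08, aborted because its submission queue was deleted), printing one command/completion pair per I/O, after which the qpair is disconnected and freed and the controller reset begins. Purely as an illustration (not something the test scripts themselves do), the size of such a burst can be tallied from the saved bdevperf output referenced later in this log:

  grep -c 'ABORTED - SQ DELETION' try.txt    # illustrative only; try.txt is the saved bdevperf log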
00:24:57.962 [2024-06-10 10:50:15.598874] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:57.962 [2024-06-10 10:50:15.598893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.962 [2024-06-10 10:50:15.598901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.962 [2024-06-10 10:50:15.598909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.962 [2024-06-10 10:50:15.598916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.962 [2024-06-10 10:50:15.598924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.962 [2024-06-10 10:50:15.598931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.962 [2024-06-10 10:50:15.598939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.962 [2024-06-10 10:50:15.598946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.962 [2024-06-10 10:50:15.598953] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.962 [2024-06-10 10:50:15.602530] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.962 [2024-06-10 10:50:15.602557] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e65a90 (9): Bad file descriptor 00:24:57.962 [2024-06-10 10:50:15.768639] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
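That is one complete failover cycle: bdev_nvme_failover_trid moves the active path (here from 10.0.0.2:4422 back to 10.0.0.2:4420), the admin qpair's outstanding ASYNC EVENT REQUESTs are aborted, the controller briefly reports the failed state, and the reconnect finishes with "Resetting controller successful". Later in the trace the harness repeatedly confirms that the NVMe0 controller is still attached; a minimal sketch of that check, with the long workspace path abbreviated to ./scripts/rpc.py:

  # same check the trace runs at host/failover.sh@82/@88/@95/@99/@103;
  # the JSON it greps through depends on the SPDK version
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0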
00:24:57.962 00:24:57.962 Latency(us) 00:24:57.962 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:57.962 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:57.962 Verification LBA range: start 0x0 length 0x4000 00:24:57.962 NVMe0n1 : 15.05 11109.46 43.40 757.87 0.00 10729.64 549.55 42816.85 00:24:57.962 =================================================================================================================== 00:24:57.962 Total : 11109.46 43.40 757.87 0.00 10729.64 549.55 42816.85 00:24:57.962 Received shutdown signal, test time was about 15.000000 seconds 00:24:57.962 00:24:57.962 Latency(us) 00:24:57.962 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:57.962 =================================================================================================================== 00:24:57.962 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:57.962 10:50:21 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:24:57.962 10:50:21 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:24:57.962 10:50:21 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:24:57.962 10:50:21 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=951008 00:24:57.962 10:50:21 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 951008 /var/tmp/bdevperf.sock 00:24:57.962 10:50:21 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:24:57.962 10:50:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 951008 ']' 00:24:57.962 10:50:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:57.962 10:50:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:57.962 10:50:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:57.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
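The shell trace just above is the pass/fail gate for the run that produced all of the output before it: host/failover.sh@65 greps the saved bdevperf output for the reset notice, @67 requires exactly three of them (one per failover), and @72/@75 then start a second bdevperf idle (-z, wait for an RPC before running I/O) on /var/tmp/bdevperf.sock for the remaining path checks. A minimal sketch of that gate, assuming the output was saved to try.txt as in this run:

  count=$(grep -c 'Resetting controller successful' try.txt)
  if (( count != 3 )); then
      echo "expected 3 successful controller resets, got $count"
      exit 1
  fi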
00:24:57.962 10:50:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:57.962 10:50:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:58.533 10:50:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:58.533 10:50:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:24:58.533 10:50:22 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:58.792 [2024-06-10 10:50:22.924540] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:58.792 10:50:22 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:59.053 [2024-06-10 10:50:23.092916] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:59.053 10:50:23 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:59.313 NVMe0n1 00:24:59.313 10:50:23 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:59.884 00:24:59.884 10:50:23 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:59.884 00:25:00.145 10:50:24 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:00.145 10:50:24 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:00.145 10:50:24 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:00.406 10:50:24 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:03.708 10:50:27 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:03.708 10:50:27 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:03.708 10:50:27 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:03.709 10:50:27 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=952026 00:25:03.709 10:50:27 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 952026 00:25:04.650 0 00:25:04.650 10:50:28 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:04.650 [2024-06-10 10:50:22.005154] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
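The sequence above builds the multipath setup for the second half of the test: two extra listeners (4421 and 4422) are added to nqn.2016-06.io.spdk:cnode1 on the target, NVMe0 is attached through the bdevperf RPC socket at 4420, 4421 and 4422 so the bdev has three paths, the 4420 path is detached to force a failover, and after a short sleep the controller is re-checked and an I/O run is started through bdevperf.py. The path juggling, condensed into a sketch with every command taken from the trace and the workspace paths abbreviated:

  rpc=./scripts/rpc.py   # abbreviated; the trace uses the full workspace path
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  for port in 4420 4421 4422; do
      $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
          -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  sleep 3
  # then verify NVMe0 is still present and kick off bdevperf.py perform_tests (sketched below)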
00:25:04.650 [2024-06-10 10:50:22.005213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid951008 ] 00:25:04.650 EAL: No free 2048 kB hugepages reported on node 1 00:25:04.650 [2024-06-10 10:50:22.065142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.650 [2024-06-10 10:50:22.128884] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:25:04.650 [2024-06-10 10:50:24.482602] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:04.650 [2024-06-10 10:50:24.482647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.650 [2024-06-10 10:50:24.482658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.650 [2024-06-10 10:50:24.482667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.650 [2024-06-10 10:50:24.482675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.650 [2024-06-10 10:50:24.482683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.650 [2024-06-10 10:50:24.482690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.650 [2024-06-10 10:50:24.482698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.650 [2024-06-10 10:50:24.482705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.650 [2024-06-10 10:50:24.482713] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:04.650 [2024-06-10 10:50:24.482740] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:04.650 [2024-06-10 10:50:24.482754] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x133ba90 (9): Bad file descriptor 00:25:04.650 [2024-06-10 10:50:24.617461] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:04.650 Running I/O for 1 seconds... 
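The try.txt excerpt above is that second bdevperf instance's own log: started idle with -z, it attached the controller, and once the 4420 path was detached it failed over to 10.0.0.2:4421 and ran the 1-second verify job that bdevperf.py triggered. The drive-by-RPC pattern, reduced to a sketch with the binary and script paths abbreviated from the ones in the trace:

  # start bdevperf idle; -z makes it wait for a perform_tests RPC before doing I/O
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  # ... attach/detach the NVMe0 paths over rpc.py -s /var/tmp/bdevperf.sock (see above) ...
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  wait $!    # the trace does the same via run_test_pid=952026 / wait 952026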
00:25:04.650 00:25:04.650 Latency(us) 00:25:04.650 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.650 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:04.650 Verification LBA range: start 0x0 length 0x4000 00:25:04.650 NVMe0n1 : 1.01 11241.17 43.91 0.00 0.00 11333.36 2607.79 11031.89 00:25:04.650 =================================================================================================================== 00:25:04.650 Total : 11241.17 43.91 0.00 0.00 11333.36 2607.79 11031.89 00:25:04.650 10:50:28 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:04.650 10:50:28 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:04.911 10:50:28 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:04.911 10:50:29 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:04.911 10:50:29 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:05.171 10:50:29 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:05.432 10:50:29 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:08.749 10:50:32 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:08.749 10:50:32 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:08.749 10:50:32 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 951008 00:25:08.749 10:50:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 951008 ']' 00:25:08.749 10:50:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 951008 00:25:08.749 10:50:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:25:08.749 10:50:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:08.749 10:50:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 951008 00:25:08.749 10:50:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:25:08.749 10:50:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:25:08.749 10:50:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 951008' 00:25:08.749 killing process with pid 951008 00:25:08.749 10:50:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 951008 00:25:08.749 10:50:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 951008 00:25:08.749 10:50:32 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:08.749 10:50:32 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:08.749 10:50:33 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:08.749 10:50:33 
nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:08.749 10:50:33 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:08.749 10:50:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:08.749 10:50:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:25:08.749 10:50:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:08.749 10:50:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:25:08.749 10:50:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:08.749 10:50:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:08.749 rmmod nvme_tcp 00:25:09.010 rmmod nvme_fabrics 00:25:09.010 rmmod nvme_keyring 00:25:09.010 10:50:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:09.010 10:50:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:25:09.010 10:50:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:25:09.010 10:50:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 947288 ']' 00:25:09.010 10:50:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 947288 00:25:09.010 10:50:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 947288 ']' 00:25:09.010 10:50:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 947288 00:25:09.010 10:50:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:25:09.010 10:50:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:09.010 10:50:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 947288 00:25:09.010 10:50:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:25:09.010 10:50:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:25:09.010 10:50:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 947288' 00:25:09.010 killing process with pid 947288 00:25:09.010 10:50:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 947288 00:25:09.010 [2024-06-10 10:50:33.136734] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:09.010 10:50:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 947288 00:25:09.010 10:50:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:09.010 10:50:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:09.010 10:50:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:09.010 10:50:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:09.010 10:50:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:09.010 10:50:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.010 10:50:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:09.010 10:50:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.556 10:50:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:11.556 00:25:11.556 real 0m39.728s 00:25:11.556 user 2m2.661s 
00:25:11.556 sys 0m8.087s 00:25:11.556 10:50:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:11.556 10:50:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:11.556 ************************************ 00:25:11.556 END TEST nvmf_failover 00:25:11.556 ************************************ 00:25:11.556 10:50:35 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:11.556 10:50:35 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:11.556 10:50:35 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:11.556 10:50:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:11.556 ************************************ 00:25:11.556 START TEST nvmf_host_discovery 00:25:11.556 ************************************ 00:25:11.556 10:50:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:11.556 * Looking for test storage... 00:25:11.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:11.556 10:50:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:11.556 10:50:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:11.556 10:50:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:11.556 10:50:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:11.556 10:50:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:11.556 10:50:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:11.556 10:50:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:11.556 10:50:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:11.556 10:50:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:11.556 10:50:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:11.556 10:50:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:11.556 10:50:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:11.556 10:50:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:11.556 10:50:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:11.556 10:50:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:11.556 10:50:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:11.556 10:50:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:11.556 10:50:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:11.556 10:50:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:11.556 10:50:35 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:11.556 10:50:35 nvmf_tcp.nvmf_host_discovery -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:11.556 10:50:35 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:11.556 10:50:35 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.557 10:50:35 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.557 10:50:35 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.557 10:50:35 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:11.557 10:50:35 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.557 10:50:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:25:11.557 10:50:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:11.557 10:50:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:11.557 10:50:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:11.557 10:50:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:11.557 10:50:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:11.557 10:50:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:11.557 10:50:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:11.557 10:50:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 
-- # have_pci_nics=0 00:25:11.557 10:50:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:11.557 10:50:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:11.557 10:50:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:11.557 10:50:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:11.557 10:50:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:11.557 10:50:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:11.557 10:50:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:11.557 10:50:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:11.557 10:50:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:11.557 10:50:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:11.557 10:50:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:11.557 10:50:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:11.557 10:50:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.557 10:50:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:11.557 10:50:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.557 10:50:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:11.557 10:50:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:11.557 10:50:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:25:11.557 10:50:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:18.148 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:18.148 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:18.148 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == 
e810 ]] 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:18.409 Found net devices under 0000:31:00.0: cvl_0_0 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:18.409 Found net devices under 0000:31:00.1: cvl_0_1 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:18.409 10:50:42 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:18.409 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:18.670 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:18.670 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:18.670 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:18.670 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:18.670 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.673 ms 00:25:18.670 00:25:18.670 --- 10.0.0.2 ping statistics --- 00:25:18.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.670 rtt min/avg/max/mdev = 0.673/0.673/0.673/0.000 ms 00:25:18.670 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:18.670 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:18.670 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:25:18.670 00:25:18.670 --- 10.0.0.1 ping statistics --- 00:25:18.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.670 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:25:18.670 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:18.670 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:25:18.670 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:18.670 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:18.670 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:18.670 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:18.670 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:18.670 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:18.670 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:18.670 10:50:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:18.670 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:18.670 10:50:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:18.670 10:50:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.670 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=957416 00:25:18.670 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 957416 00:25:18.671 10:50:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:18.671 10:50:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # '[' -z 957416 ']' 00:25:18.671 10:50:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:18.671 10:50:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:18.671 10:50:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:18.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:18.671 10:50:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:18.671 10:50:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.671 [2024-06-10 10:50:42.857640] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:25:18.671 [2024-06-10 10:50:42.857702] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:18.671 EAL: No free 2048 kB hugepages reported on node 1 00:25:18.671 [2024-06-10 10:50:42.947022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.932 [2024-06-10 10:50:43.038984] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
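Condensed, the nvmf_tcp_init and nvmfappstart steps traced above amount to the shell sequence below. This is a sketch, not the verbatim helpers: the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are what this particular run detected and assigned, the nvmf_tgt path is shortened from the full workspace path in the log, and the backgrounding/PID capture is simplified.

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                        # the target side gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator interface, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                  # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!                                          # waitforlisten then polls /var/tmp/spdk.sock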
00:25:18.932 [2024-06-10 10:50:43.039038] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:18.932 [2024-06-10 10:50:43.039045] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:18.932 [2024-06-10 10:50:43.039053] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:18.932 [2024-06-10 10:50:43.039059] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:18.932 [2024-06-10 10:50:43.039092] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:25:19.504 10:50:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:19.504 10:50:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@863 -- # return 0 00:25:19.504 10:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:19.504 10:50:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:19.504 10:50:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.504 10:50:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:19.504 10:50:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:19.504 10:50:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.504 10:50:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.504 [2024-06-10 10:50:43.690055] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:19.504 10:50:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:19.504 10:50:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:19.504 10:50:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.504 10:50:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.504 [2024-06-10 10:50:43.698023] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:19.504 [2024-06-10 10:50:43.698289] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:19.504 10:50:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:19.504 10:50:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:19.504 10:50:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.504 10:50:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.504 null0 00:25:19.504 10:50:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:19.504 10:50:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:19.504 10:50:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.504 10:50:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.504 null1 00:25:19.504 10:50:43 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:19.504 10:50:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:19.504 10:50:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:19.504 10:50:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.504 10:50:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:19.504 10:50:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=957462 00:25:19.504 10:50:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 957462 /tmp/host.sock 00:25:19.504 10:50:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:19.504 10:50:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # '[' -z 957462 ']' 00:25:19.504 10:50:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/tmp/host.sock 00:25:19.504 10:50:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:19.504 10:50:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:19.504 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:19.504 10:50:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:19.504 10:50:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.504 [2024-06-10 10:50:43.780228] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:25:19.504 [2024-06-10 10:50:43.780296] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid957462 ] 00:25:19.765 EAL: No free 2048 kB hugepages reported on node 1 00:25:19.765 [2024-06-10 10:50:43.844570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.765 [2024-06-10 10:50:43.919928] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.337 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:20.337 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@863 -- # return 0 00:25:20.337 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:20.337 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:20.337 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.337 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.337 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.337 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:20.337 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.337 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.337 10:50:44 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.337 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:20.337 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:20.337 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:20.337 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:20.337 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.337 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:20.337 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.337 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:20.337 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.337 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:20.337 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:20.337 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:20.337 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:20.337 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:20.337 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:20.337 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.337 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.598 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.598 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:20.598 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:20.598 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.598 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.598 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.598 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:20.598 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:20.598 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:20.598 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.598 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:20.598 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.598 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:20.598 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.598 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:20.598 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:20.598 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:20.598 10:50:44 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:25:20.598 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.598 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:20.598 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.598 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:20.598 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.598 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:20.598 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:20.598 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.598 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.598 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.598 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:20.599 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:20.599 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:20.599 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.599 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:20.599 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.599 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:20.599 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.599 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:20.599 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:20.599 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:20.599 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:20.599 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.599 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:20.599 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.599 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:20.599 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.859 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:20.859 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:20.859 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.859 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.859 [2024-06-10 10:50:44.897292] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:20.859 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.859 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:20.859 
10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:20.859 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:20.859 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.859 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:20.859 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.859 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:20.859 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.859 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:20.859 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:20.859 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:20.859 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:20.859 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.859 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:20.859 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.859 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:20.859 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.859 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:20.859 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:20.859 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:20.859 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:20.859 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:20.859 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:20.859 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:20.859 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:20.859 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:25:20.859 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:20.859 10:50:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:20.859 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.859 10:50:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.859 10:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.859 10:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:20.859 10:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:20.859 10:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:25:20.859 10:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:20.859 10:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:20.859 10:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.859 10:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.859 10:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.859 10:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:20.859 10:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:20.859 10:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:20.859 10:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:20.859 10:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:20.859 10:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:25:20.859 10:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:20.859 10:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:20.859 10:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:20.859 10:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:20.860 10:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.860 10:50:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:20.860 10:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:20.860 10:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == \n\v\m\e\0 ]] 00:25:20.860 10:50:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1 00:25:21.429 [2024-06-10 10:50:45.610153] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:21.429 [2024-06-10 10:50:45.610173] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:21.429 [2024-06-10 10:50:45.610186] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:21.689 [2024-06-10 10:50:45.739598] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:21.689 [2024-06-10 10:50:45.881495] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:25:21.689 [2024-06-10 10:50:45.881519] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:21.949 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:21.949 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:21.949 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:25:21.949 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:21.949 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:21.949 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.949 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:21.949 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.949 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:21.949 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.949 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.949 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:21.949 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:21.949 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:21.949 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:21.949 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:21.949 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:21.950 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:25:21.950 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:21.950 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:21.950 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.950 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:21.950 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.950 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:21.950 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:21.950 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:21.950 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:21.950 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:21.950 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:21.950 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:21.950 10:50:46 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:21.950 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:21.950 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:25:21.950 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:21.950 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:21.950 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:21.950 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:21.950 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.950 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:21.950 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 == \4\4\2\0 ]] 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@915 -- # (( max-- )) 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.210 [2024-06-10 10:50:46.425281] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:22.210 [2024-06-10 10:50:46.426278] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:22.210 [2024-06-10 10:50:46.426305] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:22.210 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:25:22.211 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:22.211 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:22.211 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.211 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:22.211 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.211 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:22.211 10:50:46 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.211 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.211 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:22.211 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:22.211 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:22.211 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:22.211 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:22.211 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:22.211 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:25:22.211 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:22.211 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:22.211 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:22.211 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.211 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:22.211 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.471 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.471 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:22.471 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:22.471 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:22.471 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:22.471 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:22.471 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:22.471 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:22.471 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:25:22.471 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:22.471 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:22.471 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.471 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:22.471 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:22.471 10:50:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:22.471 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:22.471 [2024-06-10 10:50:46.556077] 
bdev_nvme.c:6902:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:22.471 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:22.471 10:50:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1 00:25:22.471 [2024-06-10 10:50:46.613840] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:22.471 [2024-06-10 10:50:46.613862] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:22.471 [2024-06-10 10:50:46.613868] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:23.414 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:23.414 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:23.414 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:25:23.414 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:23.414 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:23.414 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.414 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:23.414 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.414 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:23.414 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.414 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:23.414 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:23.414 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:23.414 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:23.414 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:23.414 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:23.414 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:23.414 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:23.414 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:23.414 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:25:23.414 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:23.414 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:23.414 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.414 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.414 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.414 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:23.414 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:23.414 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:25:23.414 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:23.414 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:23.414 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.414 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.676 [2024-06-10 10:50:47.705057] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:23.676 [2024-06-10 10:50:47.705081] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:23.676 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.677 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:23.677 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:23.677 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:23.677 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:23.677 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:23.677 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:25:23.677 [2024-06-10 10:50:47.713168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.677 [2024-06-10 10:50:47.713187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.677 [2024-06-10 10:50:47.713197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.677 [2024-06-10 10:50:47.713204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.677 [2024-06-10 10:50:47.713212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.677 [2024-06-10 10:50:47.713218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.677 [2024-06-10 10:50:47.713226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.677 [2024-06-10 10:50:47.713233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.677 [2024-06-10 10:50:47.713250] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1225050 is same with the state(5) to be set 00:25:23.677 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:23.677 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:23.677 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.677 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:23.677 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.677 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:23.677 [2024-06-10 10:50:47.723181] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1225050 (9): Bad file descriptor 00:25:23.677 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.677 [2024-06-10 10:50:47.733222] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:23.677 [2024-06-10 10:50:47.733685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:23.677 [2024-06-10 10:50:47.733702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1225050 with addr=10.0.0.2, port=4420 00:25:23.677 [2024-06-10 10:50:47.733710] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1225050 is same with the state(5) to be set 00:25:23.677 [2024-06-10 10:50:47.733722] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1225050 (9): Bad file descriptor 00:25:23.677 [2024-06-10 10:50:47.733732] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:23.677 [2024-06-10 10:50:47.733739] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:23.677 [2024-06-10 10:50:47.733746] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:23.677 [2024-06-10 10:50:47.733758] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
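These error bursts are the expected fallout of the step traced at host/discovery.sh@127 just above: the 4420 listener has been removed, so every reconnect attempt against 10.0.0.2:4420 is refused (errno 111, ECONNREFUSED) while the host-side reset logic keeps retrying that path; the 4421 path added earlier stays connected. Roughly, the script has just done the following and will now verify the result:

  rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420   # @127
  # nvme0 must stay attached via 10.0.0.2:4421 and both namespaces must remain visible (@129/@130)
  waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
  waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'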
00:25:23.677 [2024-06-10 10:50:47.743276] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:23.677 [2024-06-10 10:50:47.743674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:23.677 [2024-06-10 10:50:47.743686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1225050 with addr=10.0.0.2, port=4420 00:25:23.677 [2024-06-10 10:50:47.743694] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1225050 is same with the state(5) to be set 00:25:23.677 [2024-06-10 10:50:47.743705] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1225050 (9): Bad file descriptor 00:25:23.677 [2024-06-10 10:50:47.743715] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:23.677 [2024-06-10 10:50:47.743721] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:23.677 [2024-06-10 10:50:47.743728] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:23.677 [2024-06-10 10:50:47.743739] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:23.677 [2024-06-10 10:50:47.753331] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:23.677 [2024-06-10 10:50:47.753696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:23.677 [2024-06-10 10:50:47.753711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1225050 with addr=10.0.0.2, port=4420 00:25:23.677 [2024-06-10 10:50:47.753719] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1225050 is same with the state(5) to be set 00:25:23.677 [2024-06-10 10:50:47.753731] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1225050 (9): Bad file descriptor 00:25:23.677 [2024-06-10 10:50:47.753745] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:23.677 [2024-06-10 10:50:47.753752] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:23.677 [2024-06-10 10:50:47.753759] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:23.677 [2024-06-10 10:50:47.753771] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
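The waitforcondition/eval pattern that keeps interleaving with these retries (common/autotest_common.sh@913-@919 in the trace) is essentially the loop below; this is a reconstruction from the traced lines, not the verbatim helper, and the give-up branch is assumed:

  waitforcondition() {
      local cond=$1                    # e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'  (@913)
      local max=10                     # (@914)
      while (( max-- )); do            # (@915)
          eval "$cond" && return 0     # (@916, @917)
          sleep 1                      # (@919)
      done
      return 1                         # assumed: fail if the condition never holds within ~10 tries
  }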
00:25:23.677 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.677 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:23.677 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:23.677 [2024-06-10 10:50:47.763386] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:23.677 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:23.677 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:23.677 [2024-06-10 10:50:47.763791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:23.677 [2024-06-10 10:50:47.763804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1225050 with addr=10.0.0.2, port=4420 00:25:23.677 [2024-06-10 10:50:47.763811] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1225050 is same with the state(5) to be set 00:25:23.677 [2024-06-10 10:50:47.763822] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1225050 (9): Bad file descriptor 00:25:23.677 [2024-06-10 10:50:47.763832] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:23.677 [2024-06-10 10:50:47.763838] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:23.677 [2024-06-10 10:50:47.763845] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:23.677 [2024-06-10 10:50:47.763855] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:23.677 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:23.677 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:23.677 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:25:23.677 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:23.677 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:23.677 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:23.677 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.677 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:23.677 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.677 [2024-06-10 10:50:47.773438] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:23.677 [2024-06-10 10:50:47.773796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:23.677 [2024-06-10 10:50:47.773809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1225050 with addr=10.0.0.2, port=4420 00:25:23.677 [2024-06-10 10:50:47.773817] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1225050 is same with the state(5) to be set 00:25:23.677 [2024-06-10 10:50:47.773827] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1225050 (9): Bad file descriptor 00:25:23.677 [2024-06-10 10:50:47.773838] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:23.677 [2024-06-10 10:50:47.773844] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:23.677 [2024-06-10 10:50:47.773854] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:23.677 [2024-06-10 10:50:47.773865] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:23.677 [2024-06-10 10:50:47.783490] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:23.677 [2024-06-10 10:50:47.783715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:23.677 [2024-06-10 10:50:47.783727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1225050 with addr=10.0.0.2, port=4420 00:25:23.677 [2024-06-10 10:50:47.783734] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1225050 is same with the state(5) to be set 00:25:23.678 [2024-06-10 10:50:47.783746] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1225050 (9): Bad file descriptor 00:25:23.678 [2024-06-10 10:50:47.783757] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:23.678 [2024-06-10 10:50:47.783764] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:23.678 [2024-06-10 10:50:47.783771] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
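(Editorial note, not part of the captured output.) The waitforcondition calls traced above (common/autotest_common.sh@913-917) implement a bounded poll over an arbitrary shell condition. A minimal reconstruction of the pattern, with the retry delay assumed since the xtrace does not capture it:

  # hypothetical sketch of the polling helper seen in the trace;
  # the real implementation lives in the checkout's autotest_common.sh
  waitforcondition() {
      local cond=$1
      local max=10
      while (( max-- )); do
          eval "$cond" && return 0
          sleep 1        # assumed back-off between retries; not visible in the xtrace
      done
      return 1           # condition never became true within the retry budget
  }
  # usage, as in host/discovery.sh@130 above:
  # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'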
00:25:23.678 [2024-06-10 10:50:47.783781] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:23.678 [2024-06-10 10:50:47.793544] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:23.678 [2024-06-10 10:50:47.793782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:23.678 [2024-06-10 10:50:47.793798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1225050 with addr=10.0.0.2, port=4420 00:25:23.678 [2024-06-10 10:50:47.793806] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1225050 is same with the state(5) to be set 00:25:23.678 [2024-06-10 10:50:47.793818] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1225050 (9): Bad file descriptor 00:25:23.678 [2024-06-10 10:50:47.793829] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:23.678 [2024-06-10 10:50:47.793835] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:23.678 [2024-06-10 10:50:47.793842] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:23.678 [2024-06-10 10:50:47.793861] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:23.678 [2024-06-10 10:50:47.794853] bdev_nvme.c:6765:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:23.678 [2024-06-10 10:50:47.794871] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:23.678 10:50:47 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4421 == \4\4\2\1 ]] 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:23.678 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.939 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == '' ]] 00:25:23.939 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:23.939 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:23.939 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:23.939 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:23.939 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:23.939 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:23.939 10:50:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:25:23.939 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:23.939 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:23.939 10:50:48 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.939 10:50:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:23.939 10:50:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.939 10:50:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:23.939 10:50:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.939 10:50:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == '' ]] 00:25:23.939 10:50:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:23.939 10:50:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:23.939 10:50:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:23.939 10:50:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:23.939 10:50:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:23.939 10:50:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:25:23.939 10:50:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:25:23.939 10:50:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:23.939 10:50:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:25:23.939 10:50:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:23.939 10:50:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:23.939 10:50:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.939 10:50:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.939 10:50:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:23.939 10:50:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:23.939 10:50:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:23.939 10:50:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:25:23.939 10:50:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:25:23.939 10:50:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:23.939 10:50:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:23.939 10:50:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.880 [2024-06-10 10:50:49.159420] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:24.880 [2024-06-10 10:50:49.159443] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:24.880 [2024-06-10 10:50:49.159456] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:25.140 [2024-06-10 10:50:49.247729] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:25.140 [2024-06-10 10:50:49.352555] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:25.140 [2024-06-10 10:50:49.352586] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:25.140 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.140 10:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:25.140 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:25:25.140 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:25.140 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:25:25.140 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:25.140 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:25:25.140 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:25.140 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:25.140 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.140 10:50:49 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:25:25.140 request: 00:25:25.140 { 00:25:25.140 "name": "nvme", 00:25:25.140 "trtype": "tcp", 00:25:25.140 "traddr": "10.0.0.2", 00:25:25.140 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:25.140 "adrfam": "ipv4", 00:25:25.140 "trsvcid": "8009", 00:25:25.140 "wait_for_attach": true, 00:25:25.140 "method": "bdev_nvme_start_discovery", 00:25:25.140 "req_id": 1 00:25:25.140 } 00:25:25.140 Got JSON-RPC error response 00:25:25.140 response: 00:25:25.140 { 00:25:25.140 "code": -17, 00:25:25.140 "message": "File exists" 00:25:25.140 } 00:25:25.140 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:25:25.140 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:25:25.140 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:25.140 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:25.140 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:25.140 10:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:25.140 10:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:25.140 10:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:25.140 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.140 10:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:25.140 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.140 10:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:25.140 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.140 10:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:25.140 10:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:25.140 10:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:25.140 10:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:25.141 10:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:25.141 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.141 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.141 10:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:25.401 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.401 10:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:25.401 10:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:25.401 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:25:25.401 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:25.401 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- 
# local arg=rpc_cmd 00:25:25.401 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:25.401 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:25:25.401 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:25.401 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:25.401 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.401 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.401 request: 00:25:25.401 { 00:25:25.401 "name": "nvme_second", 00:25:25.401 "trtype": "tcp", 00:25:25.402 "traddr": "10.0.0.2", 00:25:25.402 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:25.402 "adrfam": "ipv4", 00:25:25.402 "trsvcid": "8009", 00:25:25.402 "wait_for_attach": true, 00:25:25.402 "method": "bdev_nvme_start_discovery", 00:25:25.402 "req_id": 1 00:25:25.402 } 00:25:25.402 Got JSON-RPC error response 00:25:25.402 response: 00:25:25.402 { 00:25:25.402 "code": -17, 00:25:25.402 "message": "File exists" 00:25:25.402 } 00:25:25.402 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:25:25.402 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:25:25.402 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:25.402 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:25.402 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:25.402 10:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:25.402 10:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:25.402 10:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:25.402 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.402 10:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:25.402 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.402 10:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:25.402 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.402 10:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:25.402 10:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:25.402 10:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:25.402 10:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:25.402 10:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:25.402 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.402 10:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:25.402 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.402 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:25.402 10:50:49 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:25.402 10:50:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:25.402 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:25:25.402 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:25.402 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:25:25.402 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:25.402 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:25:25.402 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:25.402 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:25.402 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:25.402 10:50:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.344 [2024-06-10 10:50:50.612137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:26.344 [2024-06-10 10:50:50.612174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1404bc0 with addr=10.0.0.2, port=8010 00:25:26.344 [2024-06-10 10:50:50.612189] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:26.344 [2024-06-10 10:50:50.612197] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:26.344 [2024-06-10 10:50:50.612204] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:27.768 [2024-06-10 10:50:51.614448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.768 [2024-06-10 10:50:51.614474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1221100 with addr=10.0.0.2, port=8010 00:25:27.768 [2024-06-10 10:50:51.614486] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:27.768 [2024-06-10 10:50:51.614493] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:27.768 [2024-06-10 10:50:51.614500] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:28.366 [2024-06-10 10:50:52.616383] bdev_nvme.c:7021:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:28.366 request: 00:25:28.366 { 00:25:28.366 "name": "nvme_second", 00:25:28.366 "trtype": "tcp", 00:25:28.366 "traddr": "10.0.0.2", 00:25:28.366 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:28.366 "adrfam": "ipv4", 00:25:28.366 "trsvcid": "8010", 00:25:28.366 "attach_timeout_ms": 3000, 00:25:28.366 "method": "bdev_nvme_start_discovery", 00:25:28.366 "req_id": 1 00:25:28.366 } 00:25:28.366 Got JSON-RPC error response 00:25:28.366 response: 00:25:28.366 { 00:25:28.366 "code": -110, 00:25:28.366 "message": "Connection timed out" 
00:25:28.366 } 00:25:28.366 10:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:25:28.366 10:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:25:28.366 10:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:28.366 10:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:28.366 10:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:28.366 10:50:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:28.366 10:50:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:28.366 10:50:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:28.366 10:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:28.366 10:50:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:28.367 10:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.367 10:50:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:28.367 10:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:28.626 10:50:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:28.626 10:50:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:28.626 10:50:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 957462 00:25:28.626 10:50:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:28.626 10:50:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:28.626 10:50:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:25:28.626 10:50:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:28.626 10:50:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:25:28.626 10:50:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:28.626 10:50:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:28.626 rmmod nvme_tcp 00:25:28.626 rmmod nvme_fabrics 00:25:28.626 rmmod nvme_keyring 00:25:28.626 10:50:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:28.626 10:50:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:25:28.626 10:50:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:25:28.626 10:50:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 957416 ']' 00:25:28.626 10:50:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 957416 00:25:28.626 10:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@949 -- # '[' -z 957416 ']' 00:25:28.626 10:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # kill -0 957416 00:25:28.626 10:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # uname 00:25:28.626 10:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:28.626 10:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 957416 00:25:28.626 10:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:25:28.626 10:50:52 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:25:28.626 10:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 957416' 00:25:28.627 killing process with pid 957416 00:25:28.627 10:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@968 -- # kill 957416 00:25:28.627 [2024-06-10 10:50:52.809582] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:28.627 10:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@973 -- # wait 957416 00:25:28.887 10:50:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:28.887 10:50:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:28.887 10:50:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:28.887 10:50:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:28.887 10:50:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:28.887 10:50:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.887 10:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:28.887 10:50:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.797 10:50:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:30.797 00:25:30.797 real 0m19.570s 00:25:30.797 user 0m22.791s 00:25:30.797 sys 0m6.784s 00:25:30.797 10:50:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:30.797 10:50:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.797 ************************************ 00:25:30.797 END TEST nvmf_host_discovery 00:25:30.797 ************************************ 00:25:30.797 10:50:55 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:30.797 10:50:55 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:30.797 10:50:55 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:30.797 10:50:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:30.797 ************************************ 00:25:30.797 START TEST nvmf_host_multipath_status 00:25:30.797 ************************************ 00:25:30.797 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:31.059 * Looking for test storage... 
00:25:31.059 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:31.059 10:50:55 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:31.059 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:25:31.060 10:50:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:39.203 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:39.203 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:39.203 Found net devices under 0000:31:00.0: cvl_0_0 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:39.203 Found net devices under 0000:31:00.1: cvl_0_1 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:39.203 10:51:02 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:39.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:39.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms 00:25:39.203 00:25:39.203 --- 10.0.0.2 ping statistics --- 00:25:39.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.203 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:25:39.203 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:39.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:39.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.355 ms 00:25:39.203 00:25:39.203 --- 10.0.0.1 ping statistics --- 00:25:39.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.204 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:25:39.204 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:39.204 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:25:39.204 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:39.204 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:39.204 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:39.204 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:39.204 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:39.204 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:39.204 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:39.204 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:39.204 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:39.204 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:39.204 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:39.204 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=963790 00:25:39.204 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 963790 00:25:39.204 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:39.204 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 963790 ']' 00:25:39.204 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:39.204 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:39.204 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:39.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:39.204 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:39.204 10:51:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:39.204 [2024-06-10 10:51:02.546423] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
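(Editorial note, not part of the captured output.) Condensed from the nvmf_tcp_init trace above: the target side runs inside the cvl_0_0_ns_spdk network namespace with the first e810 port (cvl_0_0) at 10.0.0.2, the initiator keeps cvl_0_1 at 10.0.0.1, and both directions are verified with a single ping. The equivalent manual setup, with every value taken from the trace, is roughly:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator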
00:25:39.204 [2024-06-10 10:51:02.546490] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:39.204 EAL: No free 2048 kB hugepages reported on node 1 00:25:39.204 [2024-06-10 10:51:02.618858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:39.204 [2024-06-10 10:51:02.693067] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:39.204 [2024-06-10 10:51:02.693104] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:39.204 [2024-06-10 10:51:02.693112] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:39.204 [2024-06-10 10:51:02.693118] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:39.204 [2024-06-10 10:51:02.693124] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:39.204 [2024-06-10 10:51:02.693278] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:25:39.204 [2024-06-10 10:51:02.693309] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:25:39.204 10:51:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:39.204 10:51:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0 00:25:39.204 10:51:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:39.204 10:51:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:39.204 10:51:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:39.204 10:51:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:39.204 10:51:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=963790 00:25:39.204 10:51:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:39.204 [2024-06-10 10:51:03.485610] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:39.464 10:51:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:39.464 Malloc0 00:25:39.464 10:51:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:39.725 10:51:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:39.725 10:51:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:39.986 [2024-06-10 10:51:04.118887] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be 
removed in v24.09 00:25:39.986 [2024-06-10 10:51:04.119138] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:39.986 10:51:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:39.986 [2024-06-10 10:51:04.271804] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:40.248 10:51:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=964151 00:25:40.248 10:51:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:40.248 10:51:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:40.248 10:51:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 964151 /var/tmp/bdevperf.sock 00:25:40.248 10:51:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 964151 ']' 00:25:40.248 10:51:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:40.248 10:51:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:40.248 10:51:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:40.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
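For reference, the target-side bring-up traced above condenses to the RPC sequence below. This is a sketch assembled from this run's xtrace rather than a canonical recipe: the rpc.py path, the 10.0.0.2 listeners, and the Malloc0/cnode1 names are specific to this CI workspace, and rpc.py here talks to the target over its default /var/tmp/spdk.sock socket.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192     # TCP transport, with the options this test passes
$RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MB malloc bdev, 512-byte blocks, backs the namespace
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2   # -r: ANA reporting, so the two listeners can advertise different states
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

bdevperf has just been launched with -z -r /var/tmp/bdevperf.sock, so every bdev_nvme_* call that follows is issued against that secondary RPC socket; the two bdev_nvme_attach_controller calls below create one NVMe/TCP path per listener, the second passing -x multipath so both paths are grouped under the single Nvme0n1 bdev that the verify workload runs against.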
00:25:40.248 10:51:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:40.248 10:51:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:41.192 10:51:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:41.192 10:51:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0 00:25:41.192 10:51:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:41.192 10:51:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:25:41.453 Nvme0n1 00:25:41.453 10:51:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:42.026 Nvme0n1 00:25:42.026 10:51:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:42.026 10:51:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:43.941 10:51:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:43.941 10:51:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:44.202 10:51:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:44.202 10:51:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:45.588 10:51:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:45.588 10:51:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:45.588 10:51:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.588 10:51:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:45.588 10:51:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.588 10:51:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:45.588 10:51:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.588 10:51:09 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:45.588 10:51:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:45.588 10:51:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:45.588 10:51:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.588 10:51:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:45.848 10:51:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.849 10:51:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:45.849 10:51:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.849 10:51:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:46.110 10:51:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:46.110 10:51:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:46.110 10:51:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.110 10:51:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:46.110 10:51:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:46.110 10:51:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:46.110 10:51:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:46.110 10:51:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.371 10:51:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:46.371 10:51:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:46.371 10:51:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:46.371 10:51:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:46.631 10:51:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:47.573 10:51:11 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:47.573 10:51:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:47.573 10:51:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.573 10:51:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:47.834 10:51:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:47.834 10:51:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:47.834 10:51:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.834 10:51:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:48.093 10:51:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.093 10:51:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:48.093 10:51:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.093 10:51:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:48.093 10:51:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.093 10:51:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:48.093 10:51:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.093 10:51:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:48.353 10:51:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.353 10:51:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:48.353 10:51:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.353 10:51:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:48.613 10:51:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.613 10:51:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:48.613 10:51:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.613 10:51:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:48.613 10:51:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.613 10:51:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:48.613 10:51:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:48.873 10:51:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:48.873 10:51:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:50.258 10:51:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:50.258 10:51:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:50.258 10:51:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.258 10:51:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:50.258 10:51:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.258 10:51:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:50.258 10:51:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.258 10:51:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:50.258 10:51:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:50.258 10:51:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:50.258 10:51:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.258 10:51:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:50.518 10:51:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.518 10:51:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:50.518 10:51:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.518 10:51:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:50.780 10:51:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.780 10:51:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:50.780 10:51:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.780 10:51:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:50.780 10:51:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.780 10:51:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:50.780 10:51:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.780 10:51:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:51.041 10:51:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.041 10:51:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:51.041 10:51:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:51.303 10:51:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:51.303 10:51:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:52.245 10:51:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:52.245 10:51:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:52.245 10:51:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.245 10:51:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:52.506 10:51:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.506 10:51:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:52.506 10:51:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.506 10:51:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:52.766 10:51:16 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:52.766 10:51:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:52.766 10:51:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.766 10:51:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:52.766 10:51:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.766 10:51:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:52.766 10:51:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.766 10:51:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:53.027 10:51:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.027 10:51:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:53.027 10:51:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.027 10:51:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:53.287 10:51:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.287 10:51:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:53.287 10:51:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.287 10:51:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:53.287 10:51:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:53.287 10:51:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:53.287 10:51:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:53.547 10:51:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:53.547 10:51:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:54.931 10:51:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:54.931 10:51:18 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:54.931 10:51:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:54.931 10:51:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.931 10:51:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:54.931 10:51:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:54.931 10:51:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.931 10:51:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:54.931 10:51:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:54.931 10:51:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:54.931 10:51:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:54.931 10:51:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.191 10:51:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.191 10:51:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:55.191 10:51:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.191 10:51:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:55.451 10:51:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.451 10:51:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:55.451 10:51:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.451 10:51:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:55.451 10:51:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:55.451 10:51:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:55.451 10:51:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.451 10:51:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:55.712 10:51:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:55.712 10:51:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:55.712 10:51:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:55.972 10:51:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:55.972 10:51:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:56.913 10:51:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:56.913 10:51:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:56.913 10:51:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.913 10:51:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:57.211 10:51:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:57.211 10:51:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:57.211 10:51:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.211 10:51:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:57.471 10:51:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.471 10:51:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:57.471 10:51:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.471 10:51:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:57.471 10:51:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.471 10:51:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:57.471 10:51:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.471 10:51:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:57.731 10:51:21 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.731 10:51:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:57.731 10:51:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.731 10:51:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:57.991 10:51:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:57.991 10:51:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:57.991 10:51:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.991 10:51:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:57.991 10:51:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.991 10:51:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:58.253 10:51:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:25:58.253 10:51:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:58.253 10:51:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:58.513 10:51:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:59.453 10:51:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:59.453 10:51:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:59.453 10:51:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.453 10:51:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:59.713 10:51:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.713 10:51:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:59.713 10:51:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.713 10:51:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").current' 00:25:59.972 10:51:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.972 10:51:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:59.972 10:51:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.972 10:51:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:59.973 10:51:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.973 10:51:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:59.973 10:51:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.973 10:51:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:00.233 10:51:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.233 10:51:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:00.233 10:51:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.233 10:51:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:00.493 10:51:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.493 10:51:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:00.493 10:51:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.493 10:51:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:00.493 10:51:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.493 10:51:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:00.493 10:51:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:00.752 10:51:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:01.012 10:51:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:02.055 10:51:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true 
true true true true 00:26:02.055 10:51:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:02.055 10:51:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.055 10:51:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:02.055 10:51:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:02.055 10:51:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:02.055 10:51:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.055 10:51:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:02.319 10:51:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.319 10:51:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:02.319 10:51:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.319 10:51:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:02.319 10:51:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.319 10:51:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:02.319 10:51:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.319 10:51:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:02.580 10:51:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.580 10:51:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:02.580 10:51:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.580 10:51:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:02.840 10:51:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.840 10:51:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:02.840 10:51:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.840 10:51:26 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:02.840 10:51:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.840 10:51:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:02.840 10:51:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:03.100 10:51:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:03.360 10:51:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:04.301 10:51:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:04.301 10:51:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:04.301 10:51:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.301 10:51:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:04.561 10:51:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.561 10:51:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:04.561 10:51:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.561 10:51:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:04.561 10:51:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.561 10:51:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:04.561 10:51:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.561 10:51:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:04.822 10:51:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.822 10:51:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:04.822 10:51:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.822 10:51:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:05.082 10:51:29 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.082 10:51:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:05.082 10:51:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.082 10:51:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:05.082 10:51:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.082 10:51:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:05.082 10:51:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.082 10:51:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:05.342 10:51:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.342 10:51:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:05.342 10:51:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:05.342 10:51:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:05.602 10:51:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:06.546 10:51:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:06.546 10:51:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:06.546 10:51:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.546 10:51:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:06.807 10:51:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.807 10:51:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:06.807 10:51:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.807 10:51:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:07.069 10:51:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:07.069 10:51:31 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:07.069 10:51:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.069 10:51:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:07.069 10:51:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.069 10:51:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:07.069 10:51:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.069 10:51:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:07.330 10:51:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.330 10:51:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:07.330 10:51:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.330 10:51:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:07.592 10:51:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.592 10:51:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:07.592 10:51:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.592 10:51:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:07.592 10:51:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:07.592 10:51:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 964151 00:26:07.592 10:51:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 964151 ']' 00:26:07.592 10:51:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 964151 00:26:07.592 10:51:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname 00:26:07.592 10:51:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:07.592 10:51:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 964151 00:26:07.592 10:51:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:26:07.592 10:51:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:26:07.592 10:51:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 
964151' 00:26:07.592 killing process with pid 964151 00:26:07.592 10:51:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 964151 00:26:07.592 10:51:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 964151 00:26:07.855 Connection closed with partial response: 00:26:07.855 00:26:07.855 00:26:07.855 10:51:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 964151 00:26:07.856 10:51:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:07.856 [2024-06-10 10:51:04.346444] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:26:07.856 [2024-06-10 10:51:04.346499] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid964151 ] 00:26:07.856 EAL: No free 2048 kB hugepages reported on node 1 00:26:07.856 [2024-06-10 10:51:04.397034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.856 [2024-06-10 10:51:04.449306] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:26:07.856 Running I/O for 90 seconds... 00:26:07.856 [2024-06-10 10:51:17.661062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.856 [2024-06-10 10:51:17.661095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:07.856 [2024-06-10 10:51:17.661130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.856 [2024-06-10 10:51:17.661137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:07.856 [2024-06-10 10:51:17.661148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.856 [2024-06-10 10:51:17.661153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:07.856 [2024-06-10 10:51:17.661163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.856 [2024-06-10 10:51:17.661168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:07.856 [2024-06-10 10:51:17.661179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.856 [2024-06-10 10:51:17.661184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:07.856 [2024-06-10 10:51:17.661194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.856 [2024-06-10 10:51:17.661199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:07.856 [2024-06-10 10:51:17.661209] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.856 [2024-06-10 10:51:17.661214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:07.856 [2024-06-10 10:51:17.661225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:65080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.856 [2024-06-10 10:51:17.661230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:07.856 [2024-06-10 10:51:17.661240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:65088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.856 [2024-06-10 10:51:17.661249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:07.856 [2024-06-10 10:51:17.661259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:65096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.856 [2024-06-10 10:51:17.661264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:07.856 [2024-06-10 10:51:17.661274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.856 [2024-06-10 10:51:17.661285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:07.856 [2024-06-10 10:51:17.661295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:65112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.856 [2024-06-10 10:51:17.661300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:07.856 [2024-06-10 10:51:17.661311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.856 [2024-06-10 10:51:17.661316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:07.856 [2024-06-10 10:51:17.661326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:65128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.856 [2024-06-10 10:51:17.661331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:07.856 [2024-06-10 10:51:17.661341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.856 [2024-06-10 10:51:17.661346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:07.856 [2024-06-10 10:51:17.661356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.856 [2024-06-10 10:51:17.661361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007b p:0 m:0 dnr:0 
00:26:07.856 [2024-06-10 10:51:17.661374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:65152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.856 [2024-06-10 10:51:17.661379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:07.856 [2024-06-10 10:51:17.661429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.856 [2024-06-10 10:51:17.661436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:07.856 [2024-06-10 10:51:17.661448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.856 [2024-06-10 10:51:17.661454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:07.856 [2024-06-10 10:51:17.661465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.856 [2024-06-10 10:51:17.661470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:07.856 [2024-06-10 10:51:17.661481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.856 [2024-06-10 10:51:17.661486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.856 [2024-06-10 10:51:17.661497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.856 [2024-06-10 10:51:17.661504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.856 [2024-06-10 10:51:17.661515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.856 [2024-06-10 10:51:17.661520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:07.856 [2024-06-10 10:51:17.661534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.856 [2024-06-10 10:51:17.661539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:07.856 [2024-06-10 10:51:17.661553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.856 [2024-06-10 10:51:17.661558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:07.856 [2024-06-10 10:51:17.661570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:65224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.856 [2024-06-10 10:51:17.661576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:22 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:07.856 [2024-06-10 10:51:17.661588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.856 [2024-06-10 10:51:17.661593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:07.856 [2024-06-10 10:51:17.661605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.856 [2024-06-10 10:51:17.661610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:07.856 [2024-06-10 10:51:17.661622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.856 [2024-06-10 10:51:17.661627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:07.856 [2024-06-10 10:51:17.661638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.856 [2024-06-10 10:51:17.661643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:07.856 [2024-06-10 10:51:17.661654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.856 [2024-06-10 10:51:17.661659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:07.856 [2024-06-10 10:51:17.661670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.856 [2024-06-10 10:51:17.661675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:07.856 [2024-06-10 10:51:17.661686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.856 [2024-06-10 10:51:17.661692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:07.856 [2024-06-10 10:51:17.661704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:64384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.856 [2024-06-10 10:51:17.661710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:07.856 [2024-06-10 10:51:17.661722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:64392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.856 [2024-06-10 10:51:17.661727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:07.856 [2024-06-10 10:51:17.661739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:64400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.857 [2024-06-10 10:51:17.661744] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:07.857 [2024-06-10 10:51:17.661755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:64408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.857 [2024-06-10 10:51:17.661760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:07.857 [2024-06-10 10:51:17.661772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:64416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.857 [2024-06-10 10:51:17.661776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:07.857 [2024-06-10 10:51:17.661788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:64424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.857 [2024-06-10 10:51:17.661793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:07.857 [2024-06-10 10:51:17.661804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:64432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.857 [2024-06-10 10:51:17.661809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:07.857 [2024-06-10 10:51:17.661820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:64440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.857 [2024-06-10 10:51:17.661825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:07.857 [2024-06-10 10:51:17.661836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:64448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.857 [2024-06-10 10:51:17.661841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:07.857 [2024-06-10 10:51:17.661852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:64456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.857 [2024-06-10 10:51:17.661858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:07.857 [2024-06-10 10:51:17.661869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:64464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.857 [2024-06-10 10:51:17.661876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:07.857 [2024-06-10 10:51:17.661887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:64472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.857 [2024-06-10 10:51:17.661891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:07.857 [2024-06-10 10:51:17.661902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:64480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:07.857 [2024-06-10 10:51:17.661907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:07.857 [2024-06-10 10:51:17.661918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.857 [2024-06-10 10:51:17.661924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:07.857 [2024-06-10 10:51:17.661935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:64496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.857 [2024-06-10 10:51:17.661942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:07.857 [2024-06-10 10:51:17.661953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:64504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.857 [2024-06-10 10:51:17.661958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:07.857 [2024-06-10 10:51:17.661969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:64512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.857 [2024-06-10 10:51:17.661974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:07.857 [2024-06-10 10:51:17.661986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:64520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.857 [2024-06-10 10:51:17.661991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:07.857 [2024-06-10 10:51:17.662003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.857 [2024-06-10 10:51:17.662008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:07.857 [2024-06-10 10:51:17.662019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:64536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.857 [2024-06-10 10:51:17.662024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:07.857 [2024-06-10 10:51:17.662035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:64544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.857 [2024-06-10 10:51:17.662040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.857 [2024-06-10 10:51:17.662052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:64552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.857 [2024-06-10 10:51:17.662057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:07.857 [2024-06-10 10:51:17.662068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:106 nsid:1 lba:64560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.857 [2024-06-10 10:51:17.662073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:07.857 [2024-06-10 10:51:17.662084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:64568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.857 [2024-06-10 10:51:17.662089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:07.857 [2024-06-10 10:51:17.662101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:64576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.857 [2024-06-10 10:51:17.662106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:07.857 [2024-06-10 10:51:17.662117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:64584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.857 [2024-06-10 10:51:17.662122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:07.857 [2024-06-10 10:51:17.662133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:64592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.857 [2024-06-10 10:51:17.662139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:07.857 [2024-06-10 10:51:17.662151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:64600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.857 [2024-06-10 10:51:17.662156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:07.857 [2024-06-10 10:51:17.662168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:64608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.857 [2024-06-10 10:51:17.662172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:07.857 [2024-06-10 10:51:17.662184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:64616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.857 [2024-06-10 10:51:17.662188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:07.857 [2024-06-10 10:51:17.662200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:64624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.857 [2024-06-10 10:51:17.662206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:07.857 [2024-06-10 10:51:17.662295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.857 [2024-06-10 10:51:17.662303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:07.857 [2024-06-10 10:51:17.662318] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.857 [2024-06-10 10:51:17.662324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:07.857 [2024-06-10 10:51:17.662338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:64648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.857 [2024-06-10 10:51:17.662344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:07.857 [2024-06-10 10:51:17.662358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:64656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.857 [2024-06-10 10:51:17.662363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:07.857 [2024-06-10 10:51:17.662377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:64664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.857 [2024-06-10 10:51:17.662383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:07.857 [2024-06-10 10:51:17.662398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:64672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.857 [2024-06-10 10:51:17.662403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:07.858 [2024-06-10 10:51:17.662417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.858 [2024-06-10 10:51:17.662422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:07.858 [2024-06-10 10:51:17.662436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:64688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.858 [2024-06-10 10:51:17.662442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:07.858 [2024-06-10 10:51:17.662460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:64696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.858 [2024-06-10 10:51:17.662465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:07.858 [2024-06-10 10:51:17.662479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.858 [2024-06-10 10:51:17.662485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:07.858 [2024-06-10 10:51:17.662499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.858 [2024-06-10 10:51:17.662504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 
sqhd:0036 p:0 m:0 dnr:0 00:26:07.858 [2024-06-10 10:51:17.662518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:64720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.858 [2024-06-10 10:51:17.662523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:07.858 [2024-06-10 10:51:17.662537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:64728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.858 [2024-06-10 10:51:17.662543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:07.858 [2024-06-10 10:51:17.662557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:64736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.858 [2024-06-10 10:51:17.662562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:07.858 [2024-06-10 10:51:17.662576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:64744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.858 [2024-06-10 10:51:17.662581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:07.858 [2024-06-10 10:51:17.662596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.858 [2024-06-10 10:51:17.662602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:07.858 [2024-06-10 10:51:17.662616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.858 [2024-06-10 10:51:17.662621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:07.858 [2024-06-10 10:51:17.662634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:64768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.858 [2024-06-10 10:51:17.662640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:07.858 [2024-06-10 10:51:17.662654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.858 [2024-06-10 10:51:17.662660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:07.858 [2024-06-10 10:51:17.662673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.858 [2024-06-10 10:51:17.662678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:07.858 [2024-06-10 10:51:17.662694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:64792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.858 [2024-06-10 10:51:17.662699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:07.858 [2024-06-10 10:51:17.662713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:64800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.858 [2024-06-10 10:51:17.662718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.858 [2024-06-10 10:51:17.662732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.858 [2024-06-10 10:51:17.662737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:07.858 [2024-06-10 10:51:17.662751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:64816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.858 [2024-06-10 10:51:17.662757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:07.858 [2024-06-10 10:51:17.662771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:64824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.858 [2024-06-10 10:51:17.662775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:07.858 [2024-06-10 10:51:17.662789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:64832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.858 [2024-06-10 10:51:17.662795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:07.858 [2024-06-10 10:51:17.662809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:64840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.858 [2024-06-10 10:51:17.662815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:07.858 [2024-06-10 10:51:17.662829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:64848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.858 [2024-06-10 10:51:17.662834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:07.858 [2024-06-10 10:51:17.662849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:64856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.858 [2024-06-10 10:51:17.662854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:07.858 [2024-06-10 10:51:17.662868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.858 [2024-06-10 10:51:17.662873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:07.858 [2024-06-10 10:51:17.662887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:64872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.858 [2024-06-10 10:51:17.662892] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:07.858 [2024-06-10 10:51:17.662906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:64880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.858 [2024-06-10 10:51:17.662911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:07.858 [2024-06-10 10:51:17.662926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:64888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.858 [2024-06-10 10:51:17.662931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:07.858 [2024-06-10 10:51:17.662947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:64896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.858 [2024-06-10 10:51:17.662953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:07.858 [2024-06-10 10:51:17.663021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:64904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.858 [2024-06-10 10:51:17.663028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:07.859 [2024-06-10 10:51:17.663045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:64912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.859 [2024-06-10 10:51:17.663051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:07.859 [2024-06-10 10:51:17.663067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:64920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.859 [2024-06-10 10:51:17.663072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:07.859 [2024-06-10 10:51:17.663088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:64928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.859 [2024-06-10 10:51:17.663094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:07.859 [2024-06-10 10:51:17.663110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:64936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.859 [2024-06-10 10:51:17.663115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:07.859 [2024-06-10 10:51:17.663131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:64944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.859 [2024-06-10 10:51:17.663136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:07.859 [2024-06-10 10:51:17.663152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:64952 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:07.859 [2024-06-10 10:51:17.663158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:07.859 [2024-06-10 10:51:17.663510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:64960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.859 [2024-06-10 10:51:17.663518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:07.859 [2024-06-10 10:51:17.663536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:64968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.859 [2024-06-10 10:51:17.663541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:07.859 [2024-06-10 10:51:17.663557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:64976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.859 [2024-06-10 10:51:17.663563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:07.859 [2024-06-10 10:51:17.663580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:64984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.859 [2024-06-10 10:51:17.663587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:07.859 [2024-06-10 10:51:17.663604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:64992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.859 [2024-06-10 10:51:17.663609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:07.859 [2024-06-10 10:51:17.663626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:65000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.859 [2024-06-10 10:51:17.663632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:07.859 [2024-06-10 10:51:17.663648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:65008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.859 [2024-06-10 10:51:17.663653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:07.859 [2024-06-10 10:51:17.663671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:65016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.859 [2024-06-10 10:51:17.663676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:07.859 [2024-06-10 10:51:29.769415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:116160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.859 [2024-06-10 10:51:29.769452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:07.859 [2024-06-10 10:51:29.769488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:28 nsid:1 lba:116808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.859 [2024-06-10 10:51:29.769495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:07.859 [2024-06-10 10:51:29.769505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:116824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.859 [2024-06-10 10:51:29.769511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:07.859 [2024-06-10 10:51:29.769521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:116840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.859 [2024-06-10 10:51:29.769526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:07.859 [2024-06-10 10:51:29.769537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:116856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.859 [2024-06-10 10:51:29.769541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.859 [2024-06-10 10:51:29.769552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:116680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.859 [2024-06-10 10:51:29.769557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:07.859 [2024-06-10 10:51:29.769568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:116712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.859 [2024-06-10 10:51:29.769573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:07.859 [2024-06-10 10:51:29.770115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:116216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.859 [2024-06-10 10:51:29.770129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:07.859 [2024-06-10 10:51:29.770141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:116720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.859 [2024-06-10 10:51:29.770146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:07.859 [2024-06-10 10:51:29.770156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:116752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.859 [2024-06-10 10:51:29.770162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:07.859 [2024-06-10 10:51:29.770173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:116872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.859 [2024-06-10 10:51:29.770179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:07.859 [2024-06-10 
10:51:29.770189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:116888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.859 [2024-06-10 10:51:29.770195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:07.859 [2024-06-10 10:51:29.770205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:116768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.859 [2024-06-10 10:51:29.770211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:07.859 [2024-06-10 10:51:29.770221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:116800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.859 [2024-06-10 10:51:29.770226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:07.859 [2024-06-10 10:51:29.770237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:116912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.859 [2024-06-10 10:51:29.770247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:07.859 [2024-06-10 10:51:29.770257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:116928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.859 [2024-06-10 10:51:29.770263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:07.859 [2024-06-10 10:51:29.770273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:116944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.859 [2024-06-10 10:51:29.770278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:07.859 [2024-06-10 10:51:29.770288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:116960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.859 [2024-06-10 10:51:29.770293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:07.860 [2024-06-10 10:51:29.770303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.860 [2024-06-10 10:51:29.770308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:07.860 [2024-06-10 10:51:29.770319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:116288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.860 [2024-06-10 10:51:29.770324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:07.860 [2024-06-10 10:51:29.770613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:116328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.860 [2024-06-10 10:51:29.770624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:25 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:07.860 [2024-06-10 10:51:29.770636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:116360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.860 [2024-06-10 10:51:29.770641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:07.860 [2024-06-10 10:51:29.770651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:116976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.860 [2024-06-10 10:51:29.770657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:07.860 [2024-06-10 10:51:29.770667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:116992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.860 [2024-06-10 10:51:29.770673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:07.860 [2024-06-10 10:51:29.770684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:117008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:07.860 [2024-06-10 10:51:29.770688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:07.860 Received shutdown signal, test time was about 25.611048 seconds 00:26:07.860 00:26:07.860 Latency(us) 00:26:07.860 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:07.860 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:07.860 Verification LBA range: start 0x0 length 0x4000 00:26:07.860 Nvme0n1 : 25.61 10983.92 42.91 0.00 0.00 11635.18 416.43 3019898.88 00:26:07.860 =================================================================================================================== 00:26:07.860 Total : 10983.92 42.91 0.00 0.00 11635.18 416.43 3019898.88 00:26:07.860 10:51:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:08.121 10:51:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:08.121 10:51:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:08.121 10:51:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:08.121 10:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:08.121 10:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:26:08.121 10:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:08.121 10:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:26:08.121 10:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:08.121 10:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:08.121 rmmod nvme_tcp 00:26:08.121 rmmod nvme_fabrics 00:26:08.121 rmmod nvme_keyring 00:26:08.121 10:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:08.121 10:51:32 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:26:08.121 10:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:26:08.121 10:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 963790 ']' 00:26:08.121 10:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 963790 00:26:08.121 10:51:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 963790 ']' 00:26:08.121 10:51:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 963790 00:26:08.121 10:51:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname 00:26:08.121 10:51:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:08.121 10:51:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 963790 00:26:08.121 10:51:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:26:08.121 10:51:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:26:08.121 10:51:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 963790' 00:26:08.121 killing process with pid 963790 00:26:08.121 10:51:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 963790 00:26:08.121 [2024-06-10 10:51:32.280561] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:08.121 10:51:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 963790 00:26:08.382 10:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:08.382 10:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:08.382 10:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:08.382 10:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:08.382 10:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:08.382 10:51:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:08.382 10:51:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:08.382 10:51:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:10.298 10:51:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:10.298 00:26:10.298 real 0m39.412s 00:26:10.298 user 1m41.529s 00:26:10.298 sys 0m10.660s 00:26:10.298 10:51:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:10.298 10:51:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:10.298 ************************************ 00:26:10.298 END TEST nvmf_host_multipath_status 00:26:10.298 ************************************ 00:26:10.298 10:51:34 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:10.298 10:51:34 nvmf_tcp -- common/autotest_common.sh@1100 -- # 
'[' 3 -le 1 ']' 00:26:10.298 10:51:34 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:26:10.298 10:51:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:10.298 ************************************ 00:26:10.298 START TEST nvmf_discovery_remove_ifc 00:26:10.298 ************************************ 00:26:10.298 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:10.559 * Looking for test storage... 00:26:10.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc 
-- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:10.559 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:10.560 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:10.560 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:10.560 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:10.560 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:10.560 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:10.560 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:10.560 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:10.560 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:10.560 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:10.560 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:26:10.560 10:51:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:26:18.708 10:51:41 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:18.708 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:18.708 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 
]] 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.708 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:18.708 Found net devices under 0000:31:00.0: cvl_0_0 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:18.709 Found net devices under 0000:31:00.1: cvl_0_1 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:18.709 
10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:18.709 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:18.709 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:26:18.709 00:26:18.709 --- 10.0.0.2 ping statistics --- 00:26:18.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.709 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:18.709 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:18.709 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.337 ms 00:26:18.709 00:26:18.709 --- 10.0.0.1 ping statistics --- 00:26:18.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.709 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@723 -- # xtrace_disable 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=974209 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 974209 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # '[' -z 974209 ']' 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:18.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:18.709 10:51:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:18.709 [2024-06-10 10:51:41.947397] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
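For reference, the nvmftestinit sequence traced above reduces to the following standalone steps. This is a sketch reconstructed from the trace; the interface names (cvl_0_0, cvl_0_1), the 10.0.0.0/24 addressing and the nvmf_tgt invocation are specific to this rig:

    # Target-side port goes into its own namespace; the initiator stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP in
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator
    # The nvmf target then runs inside the namespace (nvmfappstart -m 0x2 above):
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &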
00:26:18.709 [2024-06-10 10:51:41.947454] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:18.709 EAL: No free 2048 kB hugepages reported on node 1 00:26:18.709 [2024-06-10 10:51:42.033798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.709 [2024-06-10 10:51:42.128036] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:18.709 [2024-06-10 10:51:42.128095] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:18.709 [2024-06-10 10:51:42.128103] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:18.709 [2024-06-10 10:51:42.128110] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:18.709 [2024-06-10 10:51:42.128117] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:18.709 [2024-06-10 10:51:42.128151] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:26:18.709 10:51:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:18.709 10:51:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@863 -- # return 0 00:26:18.709 10:51:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:18.709 10:51:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:18.709 10:51:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:18.709 10:51:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:18.709 10:51:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:18.709 10:51:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:18.709 10:51:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:18.709 [2024-06-10 10:51:42.786172] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:18.709 [2024-06-10 10:51:42.794148] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:18.709 [2024-06-10 10:51:42.794413] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:18.709 null0 00:26:18.709 [2024-06-10 10:51:42.826365] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:18.709 10:51:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:18.709 10:51:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=974555 00:26:18.709 10:51:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 974555 /tmp/host.sock 00:26:18.709 10:51:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:18.709 10:51:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # '[' -z 974555 ']' 00:26:18.709 10:51:42 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local rpc_addr=/tmp/host.sock 00:26:18.709 10:51:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:18.709 10:51:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:18.709 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:18.709 10:51:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:18.709 10:51:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:18.709 [2024-06-10 10:51:42.900548] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:26:18.709 [2024-06-10 10:51:42.900609] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid974555 ] 00:26:18.709 EAL: No free 2048 kB hugepages reported on node 1 00:26:18.709 [2024-06-10 10:51:42.965647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.970 [2024-06-10 10:51:43.040712] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.542 10:51:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:19.542 10:51:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@863 -- # return 0 00:26:19.542 10:51:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:19.542 10:51:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:19.542 10:51:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:19.542 10:51:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:19.542 10:51:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:19.542 10:51:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:19.542 10:51:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:19.542 10:51:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:19.542 10:51:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:19.542 10:51:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:19.542 10:51:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:19.542 10:51:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:20.927 [2024-06-10 10:51:44.794328] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:20.927 [2024-06-10 10:51:44.794351] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:20.927 [2024-06-10 
10:51:44.794371] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:20.927 [2024-06-10 10:51:44.882657] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:20.927 [2024-06-10 10:51:45.069595] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:20.927 [2024-06-10 10:51:45.069646] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:20.927 [2024-06-10 10:51:45.069669] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:20.927 [2024-06-10 10:51:45.069683] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:20.927 [2024-06-10 10:51:45.069704] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:20.927 10:51:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:20.927 10:51:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:20.927 10:51:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:20.927 [2024-06-10 10:51:45.074327] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2142850 was disconnected and freed. delete nvme_qpair. 00:26:20.927 10:51:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:20.927 10:51:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:20.927 10:51:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:20.927 10:51:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:20.927 10:51:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:20.927 10:51:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:20.927 10:51:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:20.927 10:51:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:20.927 10:51:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:20.927 10:51:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:21.189 10:51:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:21.189 10:51:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:21.189 10:51:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:21.189 10:51:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:21.189 10:51:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:21.189 10:51:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:21.189 10:51:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:21.189 10:51:45 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:26:21.189 10:51:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:21.189 10:51:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:21.189 10:51:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:22.129 10:51:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:22.129 10:51:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:22.129 10:51:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:22.129 10:51:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:22.129 10:51:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:22.129 10:51:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:22.129 10:51:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:22.129 10:51:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:22.129 10:51:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:22.129 10:51:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:23.512 10:51:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:23.512 10:51:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:23.512 10:51:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:23.512 10:51:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:23.512 10:51:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:23.512 10:51:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:23.512 10:51:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:23.512 10:51:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:23.512 10:51:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:23.512 10:51:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:24.454 10:51:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:24.454 10:51:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:24.454 10:51:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:24.454 10:51:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:24.454 10:51:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:24.454 10:51:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:24.454 10:51:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:24.454 10:51:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:24.454 10:51:48 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:24.454 10:51:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:25.393 10:51:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:25.393 10:51:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:25.393 10:51:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:25.393 10:51:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:25.393 10:51:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:25.393 10:51:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:25.393 10:51:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:25.393 10:51:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:25.393 10:51:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:25.393 10:51:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:26.335 [2024-06-10 10:51:50.510033] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:26.335 [2024-06-10 10:51:50.510084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.335 [2024-06-10 10:51:50.510096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.335 [2024-06-10 10:51:50.510106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.335 [2024-06-10 10:51:50.510119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.335 [2024-06-10 10:51:50.510127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.335 [2024-06-10 10:51:50.510135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.335 [2024-06-10 10:51:50.510142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.335 [2024-06-10 10:51:50.510149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.335 [2024-06-10 10:51:50.510157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.335 [2024-06-10 10:51:50.510164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.335 [2024-06-10 10:51:50.510171] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109bd0 is same with the state(5) to be set 00:26:26.335 [2024-06-10 10:51:50.520052] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2109bd0 (9): Bad file descriptor 
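The get_bdev_list / wait_for_bdev pattern repeated in the trace is a poll of the host's RPC socket until the attached bdev list matches an expected name. A minimal sketch, assuming the in-tree scripts/rpc.py as the RPC client and the /tmp/host.sock socket used by this run:

    # List bdev names as a single sorted, space-separated string.
    get_bdev_list() {
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    # Poll once per second until the list equals the expected value
    # (an empty string waits for the bdev to disappear).
    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }
    wait_for_bdev nvme0n1   # present after the first discovery attach
    wait_for_bdev ''        # gone once the target-side interface is removed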
00:26:26.335 [2024-06-10 10:51:50.530095] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:26.335 10:51:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:26.335 10:51:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:26.335 10:51:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:26.335 10:51:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:26.335 10:51:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:26.335 10:51:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:26.335 10:51:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:27.276 [2024-06-10 10:51:51.537283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:27.276 [2024-06-10 10:51:51.537334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2109bd0 with addr=10.0.0.2, port=4420 00:26:27.276 [2024-06-10 10:51:51.537350] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109bd0 is same with the state(5) to be set 00:26:27.276 [2024-06-10 10:51:51.537381] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2109bd0 (9): Bad file descriptor 00:26:27.276 [2024-06-10 10:51:51.537755] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:27.276 [2024-06-10 10:51:51.537775] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:27.276 [2024-06-10 10:51:51.537783] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:27.276 [2024-06-10 10:51:51.537793] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:27.276 [2024-06-10 10:51:51.537811] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:27.276 [2024-06-10 10:51:51.537820] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:27.276 10:51:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:27.276 10:51:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:27.276 10:51:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:28.658 [2024-06-10 10:51:52.540202] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
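The reset/reconnect churn above follows from the timeouts the host was given when discovery was started earlier in the run (the host app was launched with --wait-for-rpc, so bdev options are set and the framework initialized over RPC first). Issued directly with rpc.py rather than the test's rpc_cmd wrapper, the sequence would look roughly like this:

    # Prime the host app started with --wait-for-rpc, then start discovery with
    # aggressive failure handling: retry every 1 s, fail I/O after 1 s, and drop
    # the controller after 2 s without a successful reconnect.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    scripts/rpc.py -s /tmp/host.sock framework_start_init
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 \
        --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 \
        --wait-for-attach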
00:26:28.658 [2024-06-10 10:51:52.540241] bdev_nvme.c:6729:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:28.658 [2024-06-10 10:51:52.540275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.658 [2024-06-10 10:51:52.540286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.658 [2024-06-10 10:51:52.540297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.658 [2024-06-10 10:51:52.540304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.658 [2024-06-10 10:51:52.540313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.658 [2024-06-10 10:51:52.540320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.658 [2024-06-10 10:51:52.540328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.658 [2024-06-10 10:51:52.540336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.658 [2024-06-10 10:51:52.540344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.658 [2024-06-10 10:51:52.540351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.658 [2024-06-10 10:51:52.540359] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:26:28.658 [2024-06-10 10:51:52.540833] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2109060 (9): Bad file descriptor 00:26:28.658 [2024-06-10 10:51:52.541845] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:28.658 [2024-06-10 10:51:52.541856] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:28.658 10:51:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:28.658 10:51:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:28.658 10:51:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:28.658 10:51:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:28.658 10:51:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:28.658 10:51:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:28.658 10:51:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.658 10:51:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:28.658 10:51:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:28.658 10:51:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:28.658 10:51:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:28.658 10:51:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:28.658 10:51:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:28.658 10:51:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:28.658 10:51:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:28.658 10:51:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:28.658 10:51:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:28.658 10:51:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.658 10:51:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:28.658 10:51:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:28.658 10:51:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:28.658 10:51:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:29.598 10:51:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:29.598 10:51:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:29.598 10:51:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:29.599 10:51:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:29.599 10:51:53 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:26:29.599 10:51:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:29.599 10:51:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:29.599 10:51:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:29.599 10:51:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:29.599 10:51:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:30.539 [2024-06-10 10:51:54.601490] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:30.539 [2024-06-10 10:51:54.601508] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:30.539 [2024-06-10 10:51:54.601521] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:30.539 [2024-06-10 10:51:54.730924] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:30.799 10:51:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:30.799 10:51:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:30.799 10:51:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:30.799 10:51:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:30.799 10:51:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:30.799 10:51:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:30.799 10:51:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:30.799 10:51:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:30.799 10:51:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:30.799 10:51:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:30.799 [2024-06-10 10:51:54.953377] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:30.799 [2024-06-10 10:51:54.953417] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:30.799 [2024-06-10 10:51:54.953437] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:30.799 [2024-06-10 10:51:54.953452] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:30.799 [2024-06-10 10:51:54.953460] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:30.799 [2024-06-10 10:51:54.999001] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x21168f0 was disconnected and freed. delete nvme_qpair. 
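Recovery is the mirror image of the takedown: once the target-side address and link come back, the still-running discovery service re-attaches the subsystem, this time as controller nvme1. A sketch of the restore half, reusing the wait_for_bdev helper sketched above (same rig-specific names):

    # Restore the target-side port inside its namespace, then wait for the
    # re-attached namespace bdev to appear under the new controller name.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1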
00:26:31.742 10:51:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:31.742 10:51:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:31.742 10:51:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:31.742 10:51:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:31.742 10:51:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:31.742 10:51:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:31.742 10:51:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:31.742 10:51:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:31.742 10:51:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:31.742 10:51:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:31.742 10:51:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 974555 00:26:31.742 10:51:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@949 -- # '[' -z 974555 ']' 00:26:31.742 10:51:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # kill -0 974555 00:26:31.742 10:51:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # uname 00:26:31.742 10:51:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:31.742 10:51:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 974555 00:26:31.742 10:51:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:26:31.742 10:51:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:26:31.742 10:51:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 974555' 00:26:31.742 killing process with pid 974555 00:26:31.742 10:51:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # kill 974555 00:26:31.742 10:51:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # wait 974555 00:26:32.004 10:51:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:32.004 10:51:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:32.004 10:51:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:26:32.004 10:51:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:32.004 10:51:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:26:32.004 10:51:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:32.004 10:51:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:32.004 rmmod nvme_tcp 00:26:32.004 rmmod nvme_fabrics 00:26:32.004 rmmod nvme_keyring 00:26:32.004 10:51:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:32.004 10:51:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:26:32.004 10:51:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:26:32.004 
10:51:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 974209 ']' 00:26:32.004 10:51:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 974209 00:26:32.004 10:51:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@949 -- # '[' -z 974209 ']' 00:26:32.004 10:51:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # kill -0 974209 00:26:32.004 10:51:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # uname 00:26:32.004 10:51:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:32.004 10:51:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 974209 00:26:32.004 10:51:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:26:32.004 10:51:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:26:32.004 10:51:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 974209' 00:26:32.004 killing process with pid 974209 00:26:32.004 10:51:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # kill 974209 00:26:32.004 [2024-06-10 10:51:56.244234] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:32.004 10:51:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # wait 974209 00:26:32.267 10:51:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:32.267 10:51:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:32.267 10:51:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:32.267 10:51:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:32.267 10:51:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:32.267 10:51:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.267 10:51:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:32.267 10:51:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.183 10:51:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:34.183 00:26:34.183 real 0m23.852s 00:26:34.183 user 0m29.165s 00:26:34.183 sys 0m6.643s 00:26:34.183 10:51:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:34.183 10:51:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:34.183 ************************************ 00:26:34.183 END TEST nvmf_discovery_remove_ifc 00:26:34.183 ************************************ 00:26:34.445 10:51:58 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:34.445 10:51:58 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:26:34.445 10:51:58 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:26:34.445 10:51:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:34.445 
************************************ 00:26:34.445 START TEST nvmf_identify_kernel_target 00:26:34.445 ************************************ 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:34.445 * Looking for test storage... 00:26:34.445 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:34.445 10:51:58 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:26:34.445 10:51:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:42.739 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:42.739 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:26:42.739 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:42.739 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:42.739 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:42.739 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:42.739 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:42.739 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:26:42.739 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:42.739 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:26:42.739 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:26:42.739 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:26:42.739 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:26:42.739 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:26:42.739 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:26:42.739 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:42.739 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:42.739 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:42.739 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:42.739 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:42.739 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:42.739 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:42.739 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:42.739 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:42.739 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:42.739 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:42.739 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:42.739 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:42.739 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:42.739 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:42.739 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:42.739 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:42.739 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:42.739 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:42.739 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:42.739 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:42.739 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:42.740 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:42.740 Found net devices under 0000:31:00.0: cvl_0_0 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:42.740 Found net devices under 0000:31:00.1: cvl_0_1 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:42.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:42.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.898 ms 00:26:42.740 00:26:42.740 --- 10.0.0.2 ping statistics --- 00:26:42.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:42.740 rtt min/avg/max/mdev = 0.898/0.898/0.898/0.000 ms 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:42.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
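The namespace plumbing traced above boils down to the steps below; a condensed sketch using this host's interface names (cvl_0_0/cvl_0_1) and the test's 10.0.0.0/24 addressing, both of which will differ on other machines:
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                                    # target side gets its own network namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator-side address in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic through
    ping -c 1 10.0.0.2                                              # sanity-check connectivity in both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1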
00:26:42.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:26:42.740 00:26:42.740 --- 10.0.0.1 ping statistics --- 00:26:42.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:42.740 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:42.740 10:52:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:42.740 10:52:06 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:42.740 10:52:06 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:42.740 10:52:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:26:42.740 10:52:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:42.740 10:52:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:42.740 10:52:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.740 10:52:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.740 10:52:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:42.740 10:52:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.740 10:52:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:42.740 10:52:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:42.740 10:52:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:42.740 10:52:06 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:42.740 10:52:06 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:42.740 10:52:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:42.740 10:52:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:42.740 10:52:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:42.740 10:52:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:42.740 10:52:06 
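configure_kernel_target then builds a Linux kernel nvmet target through configfs; the trace that follows amounts to roughly this (the redirection targets are not visible in the xtrace, so the attribute-file names are inferred from the stock nvmet configfs layout and should be treated as an assumption):
    modprobe nvmet
    SUB=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    PORT=/sys/kernel/config/nvmet/ports/1
    mkdir "$SUB"
    mkdir "$SUB/namespaces/1"
    mkdir "$PORT"
    echo "SPDK-nqn.2016-06.io.spdk:testnqn" > "$SUB/attr_model"     # inferred target; shows up as Model Number in the identify output below
    echo 1 > "$SUB/attr_allow_any_host"                             # inferred target
    echo /dev/nvme0n1 > "$SUB/namespaces/1/device_path"             # backing device picked by the block-device scan that follows
    echo 1 > "$SUB/namespaces/1/enable"
    echo 10.0.0.1 > "$PORT/addr_traddr"
    echo tcp  > "$PORT/addr_trtype"
    echo 4420 > "$PORT/addr_trsvcid"
    echo ipv4 > "$PORT/addr_adrfam"
    ln -s "$SUB" "$PORT/subsystems/"                                # publish the subsystem on the listener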
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:42.740 10:52:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:26:42.740 10:52:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:26:42.740 10:52:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:42.740 10:52:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:42.740 10:52:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:45.289 Waiting for block devices as requested 00:26:45.289 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:45.549 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:45.549 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:45.549 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:45.549 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:45.810 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:45.810 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:45.810 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:46.071 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:26:46.071 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:46.332 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:46.332 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:46.332 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:46.332 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:46.593 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:46.593 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:46.593 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:46.593 10:52:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:46.593 10:52:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:46.593 10:52:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:46.593 10:52:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:26:46.593 10:52:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:46.593 10:52:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:26:46.593 10:52:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:46.593 10:52:10 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:46.593 10:52:10 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:46.593 No valid GPT data, bailing 00:26:46.854 10:52:10 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:46.854 10:52:10 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:26:46.854 10:52:10 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:26:46.854 10:52:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:46.854 10:52:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:46.854 10:52:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:46.854 10:52:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:46.854 10:52:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:46.854 10:52:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:46.854 10:52:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:26:46.854 10:52:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:46.854 10:52:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:26:46.854 10:52:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:46.854 10:52:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:26:46.854 10:52:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:26:46.854 10:52:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:26:46.854 10:52:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:46.854 10:52:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:26:46.854 00:26:46.854 Discovery Log Number of Records 2, Generation counter 2 00:26:46.854 =====Discovery Log Entry 0====== 00:26:46.854 trtype: tcp 00:26:46.854 adrfam: ipv4 00:26:46.854 subtype: current discovery subsystem 00:26:46.854 treq: not specified, sq flow control disable supported 00:26:46.854 portid: 1 00:26:46.854 trsvcid: 4420 00:26:46.854 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:46.854 traddr: 10.0.0.1 00:26:46.854 eflags: none 00:26:46.854 sectype: none 00:26:46.854 =====Discovery Log Entry 1====== 00:26:46.854 trtype: tcp 00:26:46.854 adrfam: ipv4 00:26:46.854 subtype: nvme subsystem 00:26:46.854 treq: not specified, sq flow control disable supported 00:26:46.854 portid: 1 00:26:46.854 trsvcid: 4420 00:26:46.854 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:46.854 traddr: 10.0.0.1 00:26:46.854 eflags: none 00:26:46.854 sectype: none 00:26:46.854 10:52:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:46.854 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:46.855 EAL: No free 2048 kB hugepages reported on node 1 00:26:46.855 ===================================================== 00:26:46.855 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:46.855 ===================================================== 00:26:46.855 Controller Capabilities/Features 00:26:46.855 ================================ 00:26:46.855 Vendor ID: 0000 00:26:46.855 Subsystem Vendor ID: 0000 00:26:46.855 Serial Number: dbcdb159ae33f2114a2f 00:26:46.855 Model Number: Linux 00:26:46.855 Firmware Version: 6.7.0-68 00:26:46.855 Recommended Arb Burst: 0 00:26:46.855 IEEE OUI Identifier: 00 00 00 00:26:46.855 Multi-path I/O 00:26:46.855 May have multiple subsystem ports: No 00:26:46.855 May have multiple 
controllers: No 00:26:46.855 Associated with SR-IOV VF: No 00:26:46.855 Max Data Transfer Size: Unlimited 00:26:46.855 Max Number of Namespaces: 0 00:26:46.855 Max Number of I/O Queues: 1024 00:26:46.855 NVMe Specification Version (VS): 1.3 00:26:46.855 NVMe Specification Version (Identify): 1.3 00:26:46.855 Maximum Queue Entries: 1024 00:26:46.855 Contiguous Queues Required: No 00:26:46.855 Arbitration Mechanisms Supported 00:26:46.855 Weighted Round Robin: Not Supported 00:26:46.855 Vendor Specific: Not Supported 00:26:46.855 Reset Timeout: 7500 ms 00:26:46.855 Doorbell Stride: 4 bytes 00:26:46.855 NVM Subsystem Reset: Not Supported 00:26:46.855 Command Sets Supported 00:26:46.855 NVM Command Set: Supported 00:26:46.855 Boot Partition: Not Supported 00:26:46.855 Memory Page Size Minimum: 4096 bytes 00:26:46.855 Memory Page Size Maximum: 4096 bytes 00:26:46.855 Persistent Memory Region: Not Supported 00:26:46.855 Optional Asynchronous Events Supported 00:26:46.855 Namespace Attribute Notices: Not Supported 00:26:46.855 Firmware Activation Notices: Not Supported 00:26:46.855 ANA Change Notices: Not Supported 00:26:46.855 PLE Aggregate Log Change Notices: Not Supported 00:26:46.855 LBA Status Info Alert Notices: Not Supported 00:26:46.855 EGE Aggregate Log Change Notices: Not Supported 00:26:46.855 Normal NVM Subsystem Shutdown event: Not Supported 00:26:46.855 Zone Descriptor Change Notices: Not Supported 00:26:46.855 Discovery Log Change Notices: Supported 00:26:46.855 Controller Attributes 00:26:46.855 128-bit Host Identifier: Not Supported 00:26:46.855 Non-Operational Permissive Mode: Not Supported 00:26:46.855 NVM Sets: Not Supported 00:26:46.855 Read Recovery Levels: Not Supported 00:26:46.855 Endurance Groups: Not Supported 00:26:46.855 Predictable Latency Mode: Not Supported 00:26:46.855 Traffic Based Keep ALive: Not Supported 00:26:46.855 Namespace Granularity: Not Supported 00:26:46.855 SQ Associations: Not Supported 00:26:46.855 UUID List: Not Supported 00:26:46.855 Multi-Domain Subsystem: Not Supported 00:26:46.855 Fixed Capacity Management: Not Supported 00:26:46.855 Variable Capacity Management: Not Supported 00:26:46.855 Delete Endurance Group: Not Supported 00:26:46.855 Delete NVM Set: Not Supported 00:26:46.855 Extended LBA Formats Supported: Not Supported 00:26:46.855 Flexible Data Placement Supported: Not Supported 00:26:46.855 00:26:46.855 Controller Memory Buffer Support 00:26:46.855 ================================ 00:26:46.855 Supported: No 00:26:46.855 00:26:46.855 Persistent Memory Region Support 00:26:46.855 ================================ 00:26:46.855 Supported: No 00:26:46.855 00:26:46.855 Admin Command Set Attributes 00:26:46.855 ============================ 00:26:46.855 Security Send/Receive: Not Supported 00:26:46.855 Format NVM: Not Supported 00:26:46.855 Firmware Activate/Download: Not Supported 00:26:46.855 Namespace Management: Not Supported 00:26:46.855 Device Self-Test: Not Supported 00:26:46.855 Directives: Not Supported 00:26:46.855 NVMe-MI: Not Supported 00:26:46.855 Virtualization Management: Not Supported 00:26:46.855 Doorbell Buffer Config: Not Supported 00:26:46.855 Get LBA Status Capability: Not Supported 00:26:46.855 Command & Feature Lockdown Capability: Not Supported 00:26:46.855 Abort Command Limit: 1 00:26:46.855 Async Event Request Limit: 1 00:26:46.855 Number of Firmware Slots: N/A 00:26:46.855 Firmware Slot 1 Read-Only: N/A 00:26:46.855 Firmware Activation Without Reset: N/A 00:26:46.855 Multiple Update Detection Support: N/A 
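For reference, the identify output in progress here comes from pointing spdk_nvme_identify at the well-known discovery NQN rather than at a specific subsystem; the transport ID string passed with -r does all the work:
    # trtype:tcp / adrfam:IPv4      -> NVMe over TCP on IPv4
    # traddr:10.0.0.1, trsvcid:4420 -> the kernel target listener configured above
    # subnqn:nqn.2014-08.org.nvmexpress.discovery -> the well-known discovery subsystem
    ./build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
(run from the SPDK build tree; the test invokes it with the absolute workspace path shown in the trace)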
00:26:46.855 Firmware Update Granularity: No Information Provided 00:26:46.855 Per-Namespace SMART Log: No 00:26:46.855 Asymmetric Namespace Access Log Page: Not Supported 00:26:46.855 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:46.855 Command Effects Log Page: Not Supported 00:26:46.855 Get Log Page Extended Data: Supported 00:26:46.855 Telemetry Log Pages: Not Supported 00:26:46.855 Persistent Event Log Pages: Not Supported 00:26:46.855 Supported Log Pages Log Page: May Support 00:26:46.855 Commands Supported & Effects Log Page: Not Supported 00:26:46.855 Feature Identifiers & Effects Log Page:May Support 00:26:46.855 NVMe-MI Commands & Effects Log Page: May Support 00:26:46.855 Data Area 4 for Telemetry Log: Not Supported 00:26:46.855 Error Log Page Entries Supported: 1 00:26:46.855 Keep Alive: Not Supported 00:26:46.855 00:26:46.855 NVM Command Set Attributes 00:26:46.855 ========================== 00:26:46.855 Submission Queue Entry Size 00:26:46.855 Max: 1 00:26:46.855 Min: 1 00:26:46.855 Completion Queue Entry Size 00:26:46.855 Max: 1 00:26:46.855 Min: 1 00:26:46.855 Number of Namespaces: 0 00:26:46.855 Compare Command: Not Supported 00:26:46.855 Write Uncorrectable Command: Not Supported 00:26:46.855 Dataset Management Command: Not Supported 00:26:46.855 Write Zeroes Command: Not Supported 00:26:46.855 Set Features Save Field: Not Supported 00:26:46.855 Reservations: Not Supported 00:26:46.855 Timestamp: Not Supported 00:26:46.855 Copy: Not Supported 00:26:46.855 Volatile Write Cache: Not Present 00:26:46.855 Atomic Write Unit (Normal): 1 00:26:46.855 Atomic Write Unit (PFail): 1 00:26:46.855 Atomic Compare & Write Unit: 1 00:26:46.855 Fused Compare & Write: Not Supported 00:26:46.855 Scatter-Gather List 00:26:46.855 SGL Command Set: Supported 00:26:46.855 SGL Keyed: Not Supported 00:26:46.855 SGL Bit Bucket Descriptor: Not Supported 00:26:46.855 SGL Metadata Pointer: Not Supported 00:26:46.855 Oversized SGL: Not Supported 00:26:46.855 SGL Metadata Address: Not Supported 00:26:46.855 SGL Offset: Supported 00:26:46.855 Transport SGL Data Block: Not Supported 00:26:46.855 Replay Protected Memory Block: Not Supported 00:26:46.855 00:26:46.855 Firmware Slot Information 00:26:46.855 ========================= 00:26:46.855 Active slot: 0 00:26:46.855 00:26:46.855 00:26:46.855 Error Log 00:26:46.855 ========= 00:26:46.855 00:26:46.855 Active Namespaces 00:26:46.855 ================= 00:26:46.855 Discovery Log Page 00:26:46.855 ================== 00:26:46.855 Generation Counter: 2 00:26:46.855 Number of Records: 2 00:26:46.855 Record Format: 0 00:26:46.855 00:26:46.855 Discovery Log Entry 0 00:26:46.855 ---------------------- 00:26:46.855 Transport Type: 3 (TCP) 00:26:46.855 Address Family: 1 (IPv4) 00:26:46.855 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:46.855 Entry Flags: 00:26:46.855 Duplicate Returned Information: 0 00:26:46.855 Explicit Persistent Connection Support for Discovery: 0 00:26:46.855 Transport Requirements: 00:26:46.855 Secure Channel: Not Specified 00:26:46.855 Port ID: 1 (0x0001) 00:26:46.855 Controller ID: 65535 (0xffff) 00:26:46.855 Admin Max SQ Size: 32 00:26:46.855 Transport Service Identifier: 4420 00:26:46.855 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:46.855 Transport Address: 10.0.0.1 00:26:46.855 Discovery Log Entry 1 00:26:46.855 ---------------------- 00:26:46.855 Transport Type: 3 (TCP) 00:26:46.855 Address Family: 1 (IPv4) 00:26:46.855 Subsystem Type: 2 (NVM Subsystem) 00:26:46.855 Entry Flags: 
00:26:46.855 Duplicate Returned Information: 0 00:26:46.855 Explicit Persistent Connection Support for Discovery: 0 00:26:46.855 Transport Requirements: 00:26:46.855 Secure Channel: Not Specified 00:26:46.855 Port ID: 1 (0x0001) 00:26:46.855 Controller ID: 65535 (0xffff) 00:26:46.855 Admin Max SQ Size: 32 00:26:46.855 Transport Service Identifier: 4420 00:26:46.855 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:46.855 Transport Address: 10.0.0.1 00:26:46.855 10:52:11 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:46.855 EAL: No free 2048 kB hugepages reported on node 1 00:26:46.855 get_feature(0x01) failed 00:26:46.855 get_feature(0x02) failed 00:26:46.855 get_feature(0x04) failed 00:26:46.855 ===================================================== 00:26:46.855 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:46.856 ===================================================== 00:26:46.856 Controller Capabilities/Features 00:26:46.856 ================================ 00:26:46.856 Vendor ID: 0000 00:26:46.856 Subsystem Vendor ID: 0000 00:26:46.856 Serial Number: c52abdf251a5d2b1a7e2 00:26:46.856 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:46.856 Firmware Version: 6.7.0-68 00:26:46.856 Recommended Arb Burst: 6 00:26:46.856 IEEE OUI Identifier: 00 00 00 00:26:46.856 Multi-path I/O 00:26:46.856 May have multiple subsystem ports: Yes 00:26:46.856 May have multiple controllers: Yes 00:26:46.856 Associated with SR-IOV VF: No 00:26:46.856 Max Data Transfer Size: Unlimited 00:26:46.856 Max Number of Namespaces: 1024 00:26:46.856 Max Number of I/O Queues: 128 00:26:46.856 NVMe Specification Version (VS): 1.3 00:26:46.856 NVMe Specification Version (Identify): 1.3 00:26:46.856 Maximum Queue Entries: 1024 00:26:46.856 Contiguous Queues Required: No 00:26:46.856 Arbitration Mechanisms Supported 00:26:46.856 Weighted Round Robin: Not Supported 00:26:46.856 Vendor Specific: Not Supported 00:26:46.856 Reset Timeout: 7500 ms 00:26:46.856 Doorbell Stride: 4 bytes 00:26:46.856 NVM Subsystem Reset: Not Supported 00:26:46.856 Command Sets Supported 00:26:46.856 NVM Command Set: Supported 00:26:46.856 Boot Partition: Not Supported 00:26:46.856 Memory Page Size Minimum: 4096 bytes 00:26:46.856 Memory Page Size Maximum: 4096 bytes 00:26:46.856 Persistent Memory Region: Not Supported 00:26:46.856 Optional Asynchronous Events Supported 00:26:46.856 Namespace Attribute Notices: Supported 00:26:46.856 Firmware Activation Notices: Not Supported 00:26:46.856 ANA Change Notices: Supported 00:26:46.856 PLE Aggregate Log Change Notices: Not Supported 00:26:46.856 LBA Status Info Alert Notices: Not Supported 00:26:46.856 EGE Aggregate Log Change Notices: Not Supported 00:26:46.856 Normal NVM Subsystem Shutdown event: Not Supported 00:26:46.856 Zone Descriptor Change Notices: Not Supported 00:26:46.856 Discovery Log Change Notices: Not Supported 00:26:46.856 Controller Attributes 00:26:46.856 128-bit Host Identifier: Supported 00:26:46.856 Non-Operational Permissive Mode: Not Supported 00:26:46.856 NVM Sets: Not Supported 00:26:46.856 Read Recovery Levels: Not Supported 00:26:46.856 Endurance Groups: Not Supported 00:26:46.856 Predictable Latency Mode: Not Supported 00:26:46.856 Traffic Based Keep ALive: Supported 00:26:46.856 Namespace Granularity: Not Supported 
00:26:46.856 SQ Associations: Not Supported 00:26:46.856 UUID List: Not Supported 00:26:46.856 Multi-Domain Subsystem: Not Supported 00:26:46.856 Fixed Capacity Management: Not Supported 00:26:46.856 Variable Capacity Management: Not Supported 00:26:46.856 Delete Endurance Group: Not Supported 00:26:46.856 Delete NVM Set: Not Supported 00:26:46.856 Extended LBA Formats Supported: Not Supported 00:26:46.856 Flexible Data Placement Supported: Not Supported 00:26:46.856 00:26:46.856 Controller Memory Buffer Support 00:26:46.856 ================================ 00:26:46.856 Supported: No 00:26:46.856 00:26:46.856 Persistent Memory Region Support 00:26:46.856 ================================ 00:26:46.856 Supported: No 00:26:46.856 00:26:46.856 Admin Command Set Attributes 00:26:46.856 ============================ 00:26:46.856 Security Send/Receive: Not Supported 00:26:46.856 Format NVM: Not Supported 00:26:46.856 Firmware Activate/Download: Not Supported 00:26:46.856 Namespace Management: Not Supported 00:26:46.856 Device Self-Test: Not Supported 00:26:46.856 Directives: Not Supported 00:26:46.856 NVMe-MI: Not Supported 00:26:46.856 Virtualization Management: Not Supported 00:26:46.856 Doorbell Buffer Config: Not Supported 00:26:46.856 Get LBA Status Capability: Not Supported 00:26:46.856 Command & Feature Lockdown Capability: Not Supported 00:26:46.856 Abort Command Limit: 4 00:26:46.856 Async Event Request Limit: 4 00:26:46.856 Number of Firmware Slots: N/A 00:26:46.856 Firmware Slot 1 Read-Only: N/A 00:26:46.856 Firmware Activation Without Reset: N/A 00:26:46.856 Multiple Update Detection Support: N/A 00:26:46.856 Firmware Update Granularity: No Information Provided 00:26:46.856 Per-Namespace SMART Log: Yes 00:26:46.856 Asymmetric Namespace Access Log Page: Supported 00:26:46.856 ANA Transition Time : 10 sec 00:26:46.856 00:26:46.856 Asymmetric Namespace Access Capabilities 00:26:46.856 ANA Optimized State : Supported 00:26:46.856 ANA Non-Optimized State : Supported 00:26:46.856 ANA Inaccessible State : Supported 00:26:46.856 ANA Persistent Loss State : Supported 00:26:46.856 ANA Change State : Supported 00:26:46.856 ANAGRPID is not changed : No 00:26:46.856 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:46.856 00:26:46.856 ANA Group Identifier Maximum : 128 00:26:46.856 Number of ANA Group Identifiers : 128 00:26:46.856 Max Number of Allowed Namespaces : 1024 00:26:46.856 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:46.856 Command Effects Log Page: Supported 00:26:46.856 Get Log Page Extended Data: Supported 00:26:46.856 Telemetry Log Pages: Not Supported 00:26:46.856 Persistent Event Log Pages: Not Supported 00:26:46.856 Supported Log Pages Log Page: May Support 00:26:46.856 Commands Supported & Effects Log Page: Not Supported 00:26:46.856 Feature Identifiers & Effects Log Page:May Support 00:26:46.856 NVMe-MI Commands & Effects Log Page: May Support 00:26:46.856 Data Area 4 for Telemetry Log: Not Supported 00:26:46.856 Error Log Page Entries Supported: 128 00:26:46.856 Keep Alive: Supported 00:26:46.856 Keep Alive Granularity: 1000 ms 00:26:46.856 00:26:46.856 NVM Command Set Attributes 00:26:46.856 ========================== 00:26:46.856 Submission Queue Entry Size 00:26:46.856 Max: 64 00:26:46.856 Min: 64 00:26:46.856 Completion Queue Entry Size 00:26:46.856 Max: 16 00:26:46.856 Min: 16 00:26:46.856 Number of Namespaces: 1024 00:26:46.856 Compare Command: Not Supported 00:26:46.856 Write Uncorrectable Command: Not Supported 00:26:46.856 Dataset Management Command: Supported 
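The identify output here is for the data subsystem, nqn.2016-06.io.spdk:testnqn, rather than the discovery subsystem. Not part of this test, but the same controller data could be cross-checked with stock nvme-cli from the kernel initiator; a hedged sketch (the /dev/nvme1 node name is an assumption, use whatever controller node the connect actually creates, see nvme list):
    nvme connect -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    nvme id-ctrl /dev/nvme1            # assumption: node assigned by the connect above
    nvme disconnect -n nqn.2016-06.io.spdk:testnqn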
00:26:46.856 Write Zeroes Command: Supported 00:26:46.856 Set Features Save Field: Not Supported 00:26:46.856 Reservations: Not Supported 00:26:46.856 Timestamp: Not Supported 00:26:46.856 Copy: Not Supported 00:26:46.856 Volatile Write Cache: Present 00:26:46.856 Atomic Write Unit (Normal): 1 00:26:46.856 Atomic Write Unit (PFail): 1 00:26:46.856 Atomic Compare & Write Unit: 1 00:26:46.856 Fused Compare & Write: Not Supported 00:26:46.856 Scatter-Gather List 00:26:46.856 SGL Command Set: Supported 00:26:46.856 SGL Keyed: Not Supported 00:26:46.856 SGL Bit Bucket Descriptor: Not Supported 00:26:46.856 SGL Metadata Pointer: Not Supported 00:26:46.856 Oversized SGL: Not Supported 00:26:46.856 SGL Metadata Address: Not Supported 00:26:46.856 SGL Offset: Supported 00:26:46.856 Transport SGL Data Block: Not Supported 00:26:46.856 Replay Protected Memory Block: Not Supported 00:26:46.856 00:26:46.856 Firmware Slot Information 00:26:46.856 ========================= 00:26:46.856 Active slot: 0 00:26:46.856 00:26:46.856 Asymmetric Namespace Access 00:26:46.856 =========================== 00:26:46.856 Change Count : 0 00:26:46.856 Number of ANA Group Descriptors : 1 00:26:46.856 ANA Group Descriptor : 0 00:26:46.856 ANA Group ID : 1 00:26:46.856 Number of NSID Values : 1 00:26:46.856 Change Count : 0 00:26:46.856 ANA State : 1 00:26:46.856 Namespace Identifier : 1 00:26:46.856 00:26:46.856 Commands Supported and Effects 00:26:46.856 ============================== 00:26:46.856 Admin Commands 00:26:46.856 -------------- 00:26:46.856 Get Log Page (02h): Supported 00:26:46.856 Identify (06h): Supported 00:26:46.856 Abort (08h): Supported 00:26:46.856 Set Features (09h): Supported 00:26:46.856 Get Features (0Ah): Supported 00:26:46.856 Asynchronous Event Request (0Ch): Supported 00:26:46.856 Keep Alive (18h): Supported 00:26:46.856 I/O Commands 00:26:46.856 ------------ 00:26:46.856 Flush (00h): Supported 00:26:46.856 Write (01h): Supported LBA-Change 00:26:46.856 Read (02h): Supported 00:26:46.856 Write Zeroes (08h): Supported LBA-Change 00:26:46.856 Dataset Management (09h): Supported 00:26:46.856 00:26:46.856 Error Log 00:26:46.856 ========= 00:26:46.856 Entry: 0 00:26:46.856 Error Count: 0x3 00:26:46.856 Submission Queue Id: 0x0 00:26:46.856 Command Id: 0x5 00:26:46.856 Phase Bit: 0 00:26:46.856 Status Code: 0x2 00:26:46.856 Status Code Type: 0x0 00:26:46.856 Do Not Retry: 1 00:26:46.856 Error Location: 0x28 00:26:46.856 LBA: 0x0 00:26:46.856 Namespace: 0x0 00:26:46.856 Vendor Log Page: 0x0 00:26:46.856 ----------- 00:26:46.856 Entry: 1 00:26:46.856 Error Count: 0x2 00:26:46.857 Submission Queue Id: 0x0 00:26:46.857 Command Id: 0x5 00:26:46.857 Phase Bit: 0 00:26:46.857 Status Code: 0x2 00:26:46.857 Status Code Type: 0x0 00:26:46.857 Do Not Retry: 1 00:26:46.857 Error Location: 0x28 00:26:46.857 LBA: 0x0 00:26:46.857 Namespace: 0x0 00:26:46.857 Vendor Log Page: 0x0 00:26:46.857 ----------- 00:26:46.857 Entry: 2 00:26:46.857 Error Count: 0x1 00:26:46.857 Submission Queue Id: 0x0 00:26:46.857 Command Id: 0x4 00:26:46.857 Phase Bit: 0 00:26:46.857 Status Code: 0x2 00:26:46.857 Status Code Type: 0x0 00:26:46.857 Do Not Retry: 1 00:26:46.857 Error Location: 0x28 00:26:46.857 LBA: 0x0 00:26:46.857 Namespace: 0x0 00:26:46.857 Vendor Log Page: 0x0 00:26:46.857 00:26:46.857 Number of Queues 00:26:46.857 ================ 00:26:46.857 Number of I/O Submission Queues: 128 00:26:46.857 Number of I/O Completion Queues: 128 00:26:46.857 00:26:46.857 ZNS Specific Controller Data 00:26:46.857 
============================ 00:26:46.857 Zone Append Size Limit: 0 00:26:46.857 00:26:46.857 00:26:46.857 Active Namespaces 00:26:46.857 ================= 00:26:46.857 get_feature(0x05) failed 00:26:46.857 Namespace ID:1 00:26:46.857 Command Set Identifier: NVM (00h) 00:26:46.857 Deallocate: Supported 00:26:46.857 Deallocated/Unwritten Error: Not Supported 00:26:46.857 Deallocated Read Value: Unknown 00:26:46.857 Deallocate in Write Zeroes: Not Supported 00:26:46.857 Deallocated Guard Field: 0xFFFF 00:26:46.857 Flush: Supported 00:26:46.857 Reservation: Not Supported 00:26:46.857 Namespace Sharing Capabilities: Multiple Controllers 00:26:46.857 Size (in LBAs): 3750748848 (1788GiB) 00:26:46.857 Capacity (in LBAs): 3750748848 (1788GiB) 00:26:46.857 Utilization (in LBAs): 3750748848 (1788GiB) 00:26:46.857 UUID: e11fc8d5-ac5c-4223-8eac-035d1e0cb909 00:26:46.857 Thin Provisioning: Not Supported 00:26:46.857 Per-NS Atomic Units: Yes 00:26:46.857 Atomic Write Unit (Normal): 8 00:26:46.857 Atomic Write Unit (PFail): 8 00:26:46.857 Preferred Write Granularity: 8 00:26:46.857 Atomic Compare & Write Unit: 8 00:26:46.857 Atomic Boundary Size (Normal): 0 00:26:46.857 Atomic Boundary Size (PFail): 0 00:26:46.857 Atomic Boundary Offset: 0 00:26:46.857 NGUID/EUI64 Never Reused: No 00:26:46.857 ANA group ID: 1 00:26:46.857 Namespace Write Protected: No 00:26:46.857 Number of LBA Formats: 1 00:26:46.857 Current LBA Format: LBA Format #00 00:26:46.857 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:46.857 00:26:46.857 10:52:11 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:46.857 10:52:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:46.857 10:52:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:26:46.857 10:52:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:46.857 10:52:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:26:46.857 10:52:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:46.857 10:52:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:46.857 rmmod nvme_tcp 00:26:46.857 rmmod nvme_fabrics 00:26:46.857 10:52:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:46.857 10:52:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:26:46.857 10:52:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:26:46.857 10:52:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:26:46.857 10:52:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:46.857 10:52:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:46.857 10:52:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:46.857 10:52:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:46.857 10:52:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:46.857 10:52:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.857 10:52:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:46.857 10:52:11 nvmf_tcp.nvmf_identify_kernel_target 
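The namespace geometry reported just above is internally consistent: 3750748848 LBAs at the 512-byte data size of LBA Format #00 works out to the advertised 1788 GiB:
    echo $(( 3750748848 * 512 / 1024**3 ))    # prints 1788 (GiB, rounded down)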
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.402 10:52:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:49.402 10:52:13 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:49.402 10:52:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:49.402 10:52:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:26:49.402 10:52:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:49.402 10:52:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:49.402 10:52:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:49.402 10:52:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:49.402 10:52:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:49.402 10:52:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:49.402 10:52:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:52.705 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:26:52.705 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:26:52.705 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:26:52.705 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:26:52.705 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:26:52.705 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:26:52.705 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:26:52.705 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:26:52.705 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:26:52.705 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:26:52.705 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:26:52.705 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:26:52.705 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:26:52.705 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:26:52.705 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:26:52.705 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:26:52.705 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:26:52.705 00:26:52.705 real 0m18.418s 00:26:52.705 user 0m4.967s 00:26:52.705 sys 0m10.529s 00:26:52.705 10:52:16 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:52.705 10:52:16 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:52.705 ************************************ 00:26:52.705 END TEST nvmf_identify_kernel_target 00:26:52.705 ************************************ 00:26:52.705 10:52:16 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:52.705 10:52:16 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:26:52.705 10:52:16 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:26:52.705 10:52:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:52.967 ************************************ 00:26:52.967 START TEST nvmf_auth_host 00:26:52.967 ************************************ 
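The next test, nvmf_auth_host, presumably exercises NVMe in-band authentication (DH-HMAC-CHAP); judging from the digests and dhgroups arrays initialized below, it sweeps a hash-by-DH-group matrix along these lines (a sketch of the iteration only, not the actual test body):
    for digest in sha256 sha384 sha512; do
        for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
            : # one authenticated connect against nqn.2024-02.io.spdk:cnode0 per (digest, dhgroup) pair
        done
    done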
00:26:52.967 10:52:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:52.967 * Looking for test storage... 00:26:52.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:52.967 10:52:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:52.967 10:52:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:52.967 10:52:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:52.967 10:52:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:52.967 10:52:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:52.967 10:52:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:52.967 10:52:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:52.967 10:52:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:52.967 10:52:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:52.967 10:52:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:52.967 10:52:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:52.967 10:52:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:52.967 10:52:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:52.967 10:52:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:52.967 10:52:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:52.967 10:52:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:52.967 10:52:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:52.967 10:52:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:52.967 10:52:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:52.967 10:52:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:52.967 10:52:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:52.967 10:52:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:52.967 10:52:17 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.967 10:52:17 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.967 10:52:17 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.967 10:52:17 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:52.968 10:52:17 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.968 10:52:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:26:52.968 10:52:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:52.968 10:52:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:52.968 10:52:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:52.968 10:52:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:52.968 10:52:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:52.968 10:52:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:52.968 10:52:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:52.968 10:52:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:52.968 10:52:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:52.968 10:52:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:52.968 10:52:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:26:52.968 10:52:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:52.968 10:52:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:52.968 10:52:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:52.968 10:52:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:52.968 10:52:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:26:52.968 10:52:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:52.968 10:52:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:52.968 10:52:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:52.968 10:52:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:52.968 10:52:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:52.968 10:52:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:52.968 10:52:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.968 10:52:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:52.968 10:52:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:52.968 10:52:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:52.968 10:52:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:52.968 10:52:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:26:52.968 10:52:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:01.109 
10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:01.109 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:01.109 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:01.109 Found net devices under 0000:31:00.0: 
cvl_0_0 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:01.109 Found net devices under 0000:31:00.1: cvl_0_1 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:01.109 10:52:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:01.109 10:52:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:01.109 10:52:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:01.109 10:52:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:01.109 10:52:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:01.109 10:52:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:01.109 10:52:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:01.109 10:52:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:01.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:01.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms 00:27:01.109 00:27:01.109 --- 10.0.0.2 ping statistics --- 00:27:01.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:01.109 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:27:01.109 10:52:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:01.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:01.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.351 ms 00:27:01.109 00:27:01.109 --- 10.0.0.1 ping statistics --- 00:27:01.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:01.109 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:27:01.109 10:52:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:01.109 10:52:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:27:01.109 10:52:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:01.109 10:52:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:01.109 10:52:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:01.109 10:52:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:01.109 10:52:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:01.109 10:52:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:01.109 10:52:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:01.109 10:52:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:01.109 10:52:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:01.109 10:52:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:27:01.109 10:52:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.109 10:52:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=988898 00:27:01.109 10:52:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 988898 00:27:01.109 10:52:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:01.109 10:52:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 988898 ']' 00:27:01.109 10:52:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:01.109 10:52:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:01.109 10:52:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
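
The trace above is nvmftestinit for the phy/e810 case: one ice port (cvl_0_0) is moved into a private network namespace, the two ends are addressed on 10.0.0.0/24, TCP port 4420 is opened on the root-namespace port, connectivity is verified with ping in both directions, nvme-tcp is loaded, and nvmf_tgt is then started inside the namespace with -L nvme_auth. A condensed sketch of that setup, reusing the interface names, addresses, and paths from this run:

    # Two-port loopback topology used by the test (names/addresses taken from this run).
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                    # namespace that will host the SPDK app
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # move the first port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1             # root-namespace side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic
    ping -c 1 10.0.0.2                              # root namespace -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp
    # The SPDK app runs inside the namespace so its NVMe/TCP traffic crosses the two ports.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &

Every command is lifted from the trace above; only the explicit backgrounding of nvmf_tgt (normally handled by nvmfappstart/waitforlisten) is added for the sketch.
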
00:27:01.109 10:52:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:01.109 10:52:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b4c39f719bf735b65e0fa2d6a563cec5 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.LIk 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b4c39f719bf735b65e0fa2d6a563cec5 0 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b4c39f719bf735b65e0fa2d6a563cec5 0 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b4c39f719bf735b65e0fa2d6a563cec5 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.LIk 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.LIk 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.LIk 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:01.109 
10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=bc66e5f87e2717d7c1ed23b201c5b5fa0391b3be4bb627a382d82a0276362c8b 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.0Rs 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key bc66e5f87e2717d7c1ed23b201c5b5fa0391b3be4bb627a382d82a0276362c8b 3 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 bc66e5f87e2717d7c1ed23b201c5b5fa0391b3be4bb627a382d82a0276362c8b 3 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=bc66e5f87e2717d7c1ed23b201c5b5fa0391b3be4bb627a382d82a0276362c8b 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.0Rs 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.0Rs 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.0Rs 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0d3a1ac5275d09e7bdef0531aa2832ab0458176b6ab06388 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.XGn 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0d3a1ac5275d09e7bdef0531aa2832ab0458176b6ab06388 0 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0d3a1ac5275d09e7bdef0531aa2832ab0458176b6ab06388 0 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0d3a1ac5275d09e7bdef0531aa2832ab0458176b6ab06388 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.XGn 00:27:01.109 10:52:25 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.XGn 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.XGn 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c7b601d7beab3a235bd4e7ef169651d0f92874b2897df254 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.jhS 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c7b601d7beab3a235bd4e7ef169651d0f92874b2897df254 2 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c7b601d7beab3a235bd4e7ef169651d0f92874b2897df254 2 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c7b601d7beab3a235bd4e7ef169651d0f92874b2897df254 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:01.109 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.jhS 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.jhS 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.jhS 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=143520872e57ab153a60ffc9a1df1781 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.aUt 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 143520872e57ab153a60ffc9a1df1781 1 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 143520872e57ab153a60ffc9a1df1781 1 
00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=143520872e57ab153a60ffc9a1df1781 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.aUt 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.aUt 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.aUt 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1753c044126e55c3e97bf86f30d3292f 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.eZz 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1753c044126e55c3e97bf86f30d3292f 1 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1753c044126e55c3e97bf86f30d3292f 1 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1753c044126e55c3e97bf86f30d3292f 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.eZz 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.eZz 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.eZz 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=c897c0483042064b4355ef2f7b0c1b60559b5e3e86ded336 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.gOg 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c897c0483042064b4355ef2f7b0c1b60559b5e3e86ded336 2 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c897c0483042064b4355ef2f7b0c1b60559b5e3e86ded336 2 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c897c0483042064b4355ef2f7b0c1b60559b5e3e86ded336 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.gOg 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.gOg 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.gOg 00:27:01.370 10:52:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:01.371 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:01.371 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:01.371 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:01.371 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:01.371 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:01.371 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:01.371 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=769caca202aacf3b46f99d98aa37ea6f 00:27:01.371 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:01.371 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.sDa 00:27:01.371 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 769caca202aacf3b46f99d98aa37ea6f 0 00:27:01.371 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 769caca202aacf3b46f99d98aa37ea6f 0 00:27:01.371 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:01.371 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:01.371 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=769caca202aacf3b46f99d98aa37ea6f 00:27:01.371 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:01.371 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:01.631 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.sDa 00:27:01.631 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.sDa 00:27:01.631 10:52:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.sDa 00:27:01.631 10:52:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:01.631 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:27:01.631 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:01.631 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:01.631 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:01.631 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:01.631 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:01.631 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5513deb7f66ffb8a904808742ec731701d1c9e4cf1cff8770264fac7779bd94b 00:27:01.631 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:01.631 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.0oQ 00:27:01.631 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5513deb7f66ffb8a904808742ec731701d1c9e4cf1cff8770264fac7779bd94b 3 00:27:01.631 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5513deb7f66ffb8a904808742ec731701d1c9e4cf1cff8770264fac7779bd94b 3 00:27:01.631 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:01.631 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:01.631 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5513deb7f66ffb8a904808742ec731701d1c9e4cf1cff8770264fac7779bd94b 00:27:01.631 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:01.631 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:01.631 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.0oQ 00:27:01.631 10:52:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.0oQ 00:27:01.631 10:52:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.0oQ 00:27:01.631 10:52:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:01.631 10:52:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 988898 00:27:01.631 10:52:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 988898 ']' 00:27:01.631 10:52:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:01.631 10:52:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:01.631 10:52:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:01.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
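
Each gen_dhchap_key call in the block above draws len/2 random bytes as a hex string with xxd, wraps it into a DHHC-1 secret with an inline python snippet whose body the xtrace does not show, writes the result to a mktemp'd /tmp/spdk.key-<digest>.XXX file, and locks it down with chmod 0600. The keys that appear later in the log decode to the ASCII hex string followed by a 4-byte trailer, so a plausible stand-in for the hidden python step, assuming the usual DH-HMAC-CHAP layout of base64(secret || little-endian CRC-32 of the secret), looks like this:

    # Hypothetical stand-in for gen_dhchap_key <digest> <len>; the CRC-32 trailer and its
    # byte order are assumptions, since the real python body is hidden by the xtrace.
    gen_dhchap_key_sketch() {
        local digest=$1 len=$2 key file
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # hex string; the string itself is the secret
        file=$(mktemp -t "spdk.key-$digest.XXX")
        python3 -c 'import base64,struct,sys,zlib; s=sys.argv[1].encode(); crc=struct.pack("<I", zlib.crc32(s) & 0xffffffff); d={"null":0,"sha256":1,"sha384":2,"sha512":3}[sys.argv[2]]; print("DHHC-1:%02x:%s:" % (d, base64.b64encode(s + crc).decode()))' "$key" "$digest" > "$file"
        chmod 0600 "$file"
        echo "$file"
    }

    # Same shape as the calls traced above:
    keys[0]=$(gen_dhchap_key_sketch null 32)
    ckeys[0]=$(gen_dhchap_key_sketch sha512 64)

The digest table (null=0, sha256=1, sha384=2, sha512=3) matches the prefixes seen in the generated keys, e.g. DHHC-1:00: for the null-digest key and DHHC-1:03: for its sha512 controller key.
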
00:27:01.631 10:52:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:01.631 10:52:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.631 10:52:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:01.631 10:52:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:27:01.631 10:52:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:01.631 10:52:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.LIk 00:27:01.631 10:52:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:01.631 10:52:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.892 10:52:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:01.892 10:52:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.0Rs ]] 00:27:01.892 10:52:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0Rs 00:27:01.892 10:52:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:01.892 10:52:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.892 10:52:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:01.892 10:52:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:01.892 10:52:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.XGn 00:27:01.892 10:52:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:01.892 10:52:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.892 10:52:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:01.892 10:52:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.jhS ]] 00:27:01.892 10:52:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jhS 00:27:01.892 10:52:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:01.892 10:52:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.892 10:52:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:01.892 10:52:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:01.892 10:52:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.aUt 00:27:01.892 10:52:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:01.892 10:52:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.892 10:52:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:01.892 10:52:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.eZz ]] 00:27:01.892 10:52:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.eZz 00:27:01.892 10:52:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:01.892 10:52:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.892 10:52:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:01.892 10:52:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:27:01.892 10:52:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.gOg 00:27:01.892 10:52:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:01.892 10:52:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.892 10:52:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:01.892 10:52:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.sDa ]] 00:27:01.892 10:52:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.sDa 00:27:01.892 10:52:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:01.892 10:52:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.892 10:52:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:01.892 10:52:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:01.893 10:52:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.0oQ 00:27:01.893 10:52:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:01.893 10:52:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.893 10:52:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:01.893 10:52:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:01.893 10:52:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:01.893 10:52:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:01.893 10:52:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:01.893 10:52:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:01.893 10:52:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:01.893 10:52:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.893 10:52:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.893 10:52:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:01.893 10:52:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.893 10:52:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:01.893 10:52:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:01.893 10:52:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:01.893 10:52:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:01.893 10:52:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:01.893 10:52:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:01.893 10:52:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:01.893 10:52:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:01.893 10:52:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:01.893 10:52:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
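
With the application listening, the loop just traced (host/auth.sh@80-82) registers every generated key file under a stable name, key0..key4 and ckey0..ckey3, so later attach calls can reference the secrets by keyring name instead of by path; ckey4 is skipped because ckeys[4] is empty. rpc_cmd resolves to the SPDK rpc.py client talking to /var/tmp/spdk.sock, so a condensed equivalent of the loop (rpc.py path inferred from the workspace used elsewhere in this run) is:

    # Register each generated secret with the running SPDK app by name.
    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    for i in "${!keys[@]}"; do
        $RPC keyring_file_add_key "key$i" "${keys[$i]}"              # e.g. key0 /tmp/spdk.key-null.LIk
        [[ -n ${ckeys[$i]} ]] && $RPC keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    done
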
00:27:01.893 10:52:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:01.893 10:52:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:01.893 10:52:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:01.893 10:52:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:05.193 Waiting for block devices as requested 00:27:05.193 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:05.193 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:05.193 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:05.453 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:05.453 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:05.453 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:05.714 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:05.714 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:05.714 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:05.975 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:05.975 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:05.975 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:06.236 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:06.236 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:06.236 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:06.236 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:06.497 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:07.070 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:07.070 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:07.070 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:07.070 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:27:07.070 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:07.070 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:27:07.070 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:07.070 10:52:31 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:07.070 10:52:31 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:07.070 No valid GPT data, bailing 00:27:07.070 10:52:31 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:07.070 10:52:31 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:27:07.070 10:52:31 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:27:07.070 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:07.070 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:07.070 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:07.070 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:07.070 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:07.070 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:07.070 10:52:31 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:27:07.070 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:07.070 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:27:07.070 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:07.070 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:27:07.070 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:27:07.070 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:27:07.070 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:07.070 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:27:07.070 00:27:07.070 Discovery Log Number of Records 2, Generation counter 2 00:27:07.070 =====Discovery Log Entry 0====== 00:27:07.070 trtype: tcp 00:27:07.070 adrfam: ipv4 00:27:07.070 subtype: current discovery subsystem 00:27:07.070 treq: not specified, sq flow control disable supported 00:27:07.070 portid: 1 00:27:07.070 trsvcid: 4420 00:27:07.070 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:07.070 traddr: 10.0.0.1 00:27:07.070 eflags: none 00:27:07.070 sectype: none 00:27:07.070 =====Discovery Log Entry 1====== 00:27:07.070 trtype: tcp 00:27:07.070 adrfam: ipv4 00:27:07.070 subtype: nvme subsystem 00:27:07.070 treq: not specified, sq flow control disable supported 00:27:07.070 portid: 1 00:27:07.070 trsvcid: 4420 00:27:07.070 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:07.070 traddr: 10.0.0.1 00:27:07.070 eflags: none 00:27:07.070 sectype: none 00:27:07.070 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:07.070 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQzYTFhYzUyNzVkMDllN2JkZWYwNTMxYWEyODMyYWIwNDU4MTc2YjZhYjA2Mzg4OLc2PA==: 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQzYTFhYzUyNzVkMDllN2JkZWYwNTMxYWEyODMyYWIwNDU4MTc2YjZhYjA2Mzg4OLc2PA==: 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: 
]] 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.071 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.332 nvme0n1 00:27:07.332 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.332 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.332 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.332 10:52:31 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.332 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.332 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.332 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.332 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.332 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.332 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.332 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.332 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:07.332 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:07.332 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.332 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:07.332 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.332 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:07.332 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:07.332 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:07.332 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRjMzlmNzE5YmY3MzViNjVlMGZhMmQ2YTU2M2NlYzVcMolM: 00:27:07.332 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: 00:27:07.332 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:07.332 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:07.332 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRjMzlmNzE5YmY3MzViNjVlMGZhMmQ2YTU2M2NlYzVcMolM: 00:27:07.332 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: ]] 00:27:07.332 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: 00:27:07.332 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:07.332 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.332 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:07.332 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:07.332 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:07.332 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.333 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:07.333 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.333 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.333 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.333 
10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.333 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:07.333 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:07.333 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:07.333 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.333 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.333 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:07.333 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.333 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:07.333 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:07.333 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:07.333 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:07.333 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.333 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.594 nvme0n1 00:27:07.594 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.594 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQzYTFhYzUyNzVkMDllN2JkZWYwNTMxYWEyODMyYWIwNDU4MTc2YjZhYjA2Mzg4OLc2PA==: 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:07.595 10:52:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQzYTFhYzUyNzVkMDllN2JkZWYwNTMxYWEyODMyYWIwNDU4MTc2YjZhYjA2Mzg4OLc2PA==: 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: ]] 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.595 nvme0n1 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.595 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
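
From here on the trace is the authentication matrix itself: for each digest, DH group, and key index, nvmet_auth_set_key writes the negotiated hash, group, and DHHC-1 secrets into the kernel target's configfs entry for nqn.2024-02.io.spdk:host0 (the xtrace shows the echoed values but not their redirect targets), and connect_authenticate then has the SPDK app attach to the kernel target at 10.0.0.1:4420 with the matching --dhchap-key/--dhchap-ctrlr-key, checks that a controller shows up, and detaches again. A sketch of one iteration, matching the sha256/ffdhe2048/keyid=1 case traced just above; the configfs attribute names are assumptions, since they are hidden by the trace:

    HOSTDIR=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

    # Target side: expected hash, DH group, and keys for this host
    # (the dhchap_* attribute names are assumed; only the values appear in the log).
    echo 'hmac(sha256)' > "$HOSTDIR/dhchap_hash"
    echo ffdhe2048      > "$HOSTDIR/dhchap_dhgroup"
    cat /tmp/spdk.key-null.XGn   > "$HOSTDIR/dhchap_key"        # keys[1],  DHHC-1:00:MGQz...
    cat /tmp/spdk.key-sha384.jhS > "$HOSTDIR/dhchap_ctrl_key"   # ckeys[1], DHHC-1:02:Yzdi...

    # Host side: limit the initiator to the combination under test, attach with the
    # registered key names, verify the controller exists, then tear it down.
    $RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    $RPC bdev_nvme_get_controllers | jq -r '.[].name'           # expect nvme0
    $RPC bdev_nvme_detach_controller nvme0

The outer loops then repeat this pattern for every entry in digests[] and dhgroups[], which is what the remainder of this section of the log consists of.
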
00:27:07.856 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTQzNTIwODcyZTU3YWIxNTNhNjBmZmM5YTFkZjE3ODGHs8sh: 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTQzNTIwODcyZTU3YWIxNTNhNjBmZmM5YTFkZjE3ODGHs8sh: 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: ]] 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.857 10:52:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.857 nvme0n1 00:27:07.857 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.857 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.857 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.857 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.857 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.857 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:07.857 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.857 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.857 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:07.857 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yzg5N2MwNDgzMDQyMDY0YjQzNTVlZjJmN2IwYzFiNjA1NTliNWUzZTg2ZGVkMzM26lD5xA==: 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yzg5N2MwNDgzMDQyMDY0YjQzNTVlZjJmN2IwYzFiNjA1NTliNWUzZTg2ZGVkMzM26lD5xA==: 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: ]] 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: 00:27:08.118 10:52:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.118 nvme0n1 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTUxM2RlYjdmNjZmZmI4YTkwNDgwODc0MmVjNzMxNzAxZDFjOWU0Y2YxY2ZmODc3MDI2NGZhYzc3NzliZDk0YipuvLY=: 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTUxM2RlYjdmNjZmZmI4YTkwNDgwODc0MmVjNzMxNzAxZDFjOWU0Y2YxY2ZmODc3MDI2NGZhYzc3NzliZDk0YipuvLY=: 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:08.118 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.379 nvme0n1 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRjMzlmNzE5YmY3MzViNjVlMGZhMmQ2YTU2M2NlYzVcMolM: 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRjMzlmNzE5YmY3MzViNjVlMGZhMmQ2YTU2M2NlYzVcMolM: 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: ]] 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.379 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.640 nvme0n1 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQzYTFhYzUyNzVkMDllN2JkZWYwNTMxYWEyODMyYWIwNDU4MTc2YjZhYjA2Mzg4OLc2PA==: 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQzYTFhYzUyNzVkMDllN2JkZWYwNTMxYWEyODMyYWIwNDU4MTc2YjZhYjA2Mzg4OLc2PA==: 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: ]] 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.640 10:52:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.901 nvme0n1 00:27:08.901 
10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTQzNTIwODcyZTU3YWIxNTNhNjBmZmM5YTFkZjE3ODGHs8sh: 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTQzNTIwODcyZTU3YWIxNTNhNjBmZmM5YTFkZjE3ODGHs8sh: 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: ]] 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.901 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.162 nvme0n1 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yzg5N2MwNDgzMDQyMDY0YjQzNTVlZjJmN2IwYzFiNjA1NTliNWUzZTg2ZGVkMzM26lD5xA==: 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
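The target-side counterpart is the nvmet_auth_set_key helper whose body is traced around this point: for each (digest, dhgroup, keyid) tuple it echoes 'hmac(sha256)', the DH group name, the DHHC-1 secret and, when one is defined, the controller secret (host/auth.sh@48-51). xtrace does not record the redirection targets, so the lines below are only a plausible reconstruction assuming the in-kernel nvmet target, its standard per-host configfs attributes, and a host entry already created for the initiator NQN; the path and attribute names are assumptions, while the echoed values come from the log.

    # hypothetical effect of "nvmet_auth_set_key sha256 ffdhe3072 3" on the kernel target
    host_cfs=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed configfs path
    echo 'hmac(sha256)' > "$host_cfs/dhchap_hash"       # digest, host/auth.sh@48
    echo ffdhe3072      > "$host_cfs/dhchap_dhgroup"    # DH group, host/auth.sh@49
    echo "$key"         > "$host_cfs/dhchap_key"        # DHHC-1 host secret echoed at host/auth.sh@50
    echo "$ckey"        > "$host_cfs/dhchap_ctrl_key"   # controller secret at @51, only for bidirectional keys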
00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yzg5N2MwNDgzMDQyMDY0YjQzNTVlZjJmN2IwYzFiNjA1NTliNWUzZTg2ZGVkMzM26lD5xA==: 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: ]] 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:09.162 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.423 nvme0n1 00:27:09.423 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:09.423 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.423 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.423 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:09.423 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.423 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:09.423 
10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.423 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.423 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:09.423 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.423 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:09.423 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.423 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:09.423 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.423 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:09.423 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:09.423 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:09.423 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTUxM2RlYjdmNjZmZmI4YTkwNDgwODc0MmVjNzMxNzAxZDFjOWU0Y2YxY2ZmODc3MDI2NGZhYzc3NzliZDk0YipuvLY=: 00:27:09.423 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:09.423 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:09.423 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:09.423 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTUxM2RlYjdmNjZmZmI4YTkwNDgwODc0MmVjNzMxNzAxZDFjOWU0Y2YxY2ZmODc3MDI2NGZhYzc3NzliZDk0YipuvLY=: 00:27:09.423 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:09.423 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:09.423 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.423 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:09.423 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:09.423 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:09.423 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.423 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:09.423 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:09.423 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.423 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:09.423 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.423 10:52:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:09.423 10:52:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:09.424 10:52:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:09.424 10:52:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.424 10:52:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.424 10:52:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:09.424 10:52:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.424 10:52:33 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:09.424 10:52:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:09.424 10:52:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:09.424 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:09.424 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:09.424 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.684 nvme0n1 00:27:09.684 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:09.684 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.684 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.684 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:09.684 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.684 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:09.684 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.684 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.684 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:09.684 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.684 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:09.684 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:09.684 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.684 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:09.684 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.684 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:09.684 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:09.684 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:09.684 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRjMzlmNzE5YmY3MzViNjVlMGZhMmQ2YTU2M2NlYzVcMolM: 00:27:09.684 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: 00:27:09.684 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:09.684 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:09.684 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRjMzlmNzE5YmY3MzViNjVlMGZhMmQ2YTU2M2NlYzVcMolM: 00:27:09.685 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: ]] 00:27:09.685 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: 00:27:09.685 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:09.685 10:52:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.685 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:09.685 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:09.685 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:09.685 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.685 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:09.685 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:09.685 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.685 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:09.685 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.685 10:52:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:09.685 10:52:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:09.685 10:52:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:09.685 10:52:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.685 10:52:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.685 10:52:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:09.685 10:52:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.685 10:52:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:09.685 10:52:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:09.685 10:52:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:09.685 10:52:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:09.685 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:09.685 10:52:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.945 nvme0n1 00:27:09.945 10:52:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:09.945 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.945 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.945 10:52:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:09.945 10:52:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.945 10:52:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:09.945 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.945 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.945 10:52:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:09.945 10:52:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.945 10:52:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:09.945 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:09.945 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:09.945 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.945 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:09.945 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:09.945 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:09.945 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQzYTFhYzUyNzVkMDllN2JkZWYwNTMxYWEyODMyYWIwNDU4MTc2YjZhYjA2Mzg4OLc2PA==: 00:27:10.205 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: 00:27:10.206 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.206 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:10.206 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQzYTFhYzUyNzVkMDllN2JkZWYwNTMxYWEyODMyYWIwNDU4MTc2YjZhYjA2Mzg4OLc2PA==: 00:27:10.206 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: ]] 00:27:10.206 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: 00:27:10.206 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:10.206 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.206 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:10.206 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:10.206 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:10.206 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.206 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:10.206 10:52:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:10.206 10:52:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.206 10:52:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:10.206 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.206 10:52:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:10.206 10:52:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:10.206 10:52:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:10.206 10:52:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.206 10:52:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.206 10:52:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:10.206 10:52:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.206 10:52:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:10.206 10:52:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:10.206 10:52:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:10.206 10:52:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:10.206 10:52:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:10.206 10:52:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.466 nvme0n1 00:27:10.466 10:52:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:10.466 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.466 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.466 10:52:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:10.466 10:52:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.466 10:52:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:10.466 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.466 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.466 10:52:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:10.466 10:52:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.466 10:52:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:10.466 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.466 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:10.466 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.466 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:10.466 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:10.466 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:10.466 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTQzNTIwODcyZTU3YWIxNTNhNjBmZmM5YTFkZjE3ODGHs8sh: 00:27:10.466 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: 00:27:10.466 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.466 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:10.466 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTQzNTIwODcyZTU3YWIxNTNhNjBmZmM5YTFkZjE3ODGHs8sh: 00:27:10.466 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: ]] 00:27:10.466 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: 00:27:10.466 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:10.466 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.466 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:10.466 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:10.466 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:10.466 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.466 10:52:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:10.467 10:52:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:10.467 10:52:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.467 10:52:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:10.467 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.467 10:52:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:10.467 10:52:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:10.467 10:52:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:10.467 10:52:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.467 10:52:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.467 10:52:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:10.467 10:52:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.467 10:52:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:10.467 10:52:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:10.467 10:52:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:10.467 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:10.467 10:52:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:10.467 10:52:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.728 nvme0n1 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
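Stepping back, the loop structure that produces these repeated blocks can be read straight off the frame markers in the trace: host/auth.sh@101 iterates over the configured DH groups, @102 over the key indices, and @103/@104 program the target and then reconnect the host. The fragment below is a paraphrase of that structure, not the literal auth.sh source; the loop headers and helper names are exactly as they appear in the trace, with sha256 being the digest exercised in this portion of the log.

    for dhgroup in "${dhgroups[@]}"; do                       # host/auth.sh@101
        for keyid in "${!keys[@]}"; do                        # host/auth.sh@102
            nvmet_auth_set_key sha256 "$dhgroup" "$keyid"     # program the kernel target, host/auth.sh@103
            connect_authenticate sha256 "$dhgroup" "$keyid"   # reconfigure and reattach the SPDK host, host/auth.sh@104
        done
    done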
00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yzg5N2MwNDgzMDQyMDY0YjQzNTVlZjJmN2IwYzFiNjA1NTliNWUzZTg2ZGVkMzM26lD5xA==: 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yzg5N2MwNDgzMDQyMDY0YjQzNTVlZjJmN2IwYzFiNjA1NTliNWUzZTg2ZGVkMzM26lD5xA==: 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: ]] 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:10.728 10:52:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.990 nvme0n1 00:27:10.990 10:52:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:10.990 10:52:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.990 10:52:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:10.990 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.990 10:52:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.990 10:52:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:10.990 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.990 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.990 10:52:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:10.990 10:52:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.250 10:52:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:11.250 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.250 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:11.250 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.250 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:11.250 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:11.250 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:11.250 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTUxM2RlYjdmNjZmZmI4YTkwNDgwODc0MmVjNzMxNzAxZDFjOWU0Y2YxY2ZmODc3MDI2NGZhYzc3NzliZDk0YipuvLY=: 00:27:11.250 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:11.250 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:11.250 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:11.250 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTUxM2RlYjdmNjZmZmI4YTkwNDgwODc0MmVjNzMxNzAxZDFjOWU0Y2YxY2ZmODc3MDI2NGZhYzc3NzliZDk0YipuvLY=: 00:27:11.250 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:11.250 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:11.250 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.250 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:11.250 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:11.251 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:11.251 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.251 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:11.251 10:52:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:11.251 10:52:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.251 10:52:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:11.251 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.251 10:52:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:11.251 10:52:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:11.251 10:52:35 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:27:11.251 10:52:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.251 10:52:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.251 10:52:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:11.251 10:52:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.251 10:52:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:11.251 10:52:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:11.251 10:52:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:11.251 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:11.251 10:52:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:11.251 10:52:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.512 nvme0n1 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRjMzlmNzE5YmY3MzViNjVlMGZhMmQ2YTU2M2NlYzVcMolM: 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRjMzlmNzE5YmY3MzViNjVlMGZhMmQ2YTU2M2NlYzVcMolM: 00:27:11.512 10:52:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: ]] 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:11.512 10:52:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.084 nvme0n1 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.084 
10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQzYTFhYzUyNzVkMDllN2JkZWYwNTMxYWEyODMyYWIwNDU4MTc2YjZhYjA2Mzg4OLc2PA==: 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQzYTFhYzUyNzVkMDllN2JkZWYwNTMxYWEyODMyYWIwNDU4MTc2YjZhYjA2Mzg4OLc2PA==: 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: ]] 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.084 10:52:36 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:12.084 10:52:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:12.085 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:12.085 10:52:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:12.085 10:52:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.345 nvme0n1 00:27:12.345 10:52:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:12.345 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.345 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.345 10:52:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:12.345 10:52:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.345 10:52:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTQzNTIwODcyZTU3YWIxNTNhNjBmZmM5YTFkZjE3ODGHs8sh: 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTQzNTIwODcyZTU3YWIxNTNhNjBmZmM5YTFkZjE3ODGHs8sh: 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: ]] 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:12.606 10:52:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.867 nvme0n1 00:27:12.867 10:52:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:12.867 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.867 10:52:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:12.867 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.867 10:52:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.128 
10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yzg5N2MwNDgzMDQyMDY0YjQzNTVlZjJmN2IwYzFiNjA1NTliNWUzZTg2ZGVkMzM26lD5xA==: 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yzg5N2MwNDgzMDQyMDY0YjQzNTVlZjJmN2IwYzFiNjA1NTliNWUzZTg2ZGVkMzM26lD5xA==: 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: ]] 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:13.128 10:52:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.390 nvme0n1 00:27:13.390 10:52:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:13.390 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.390 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.390 10:52:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:13.390 10:52:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTUxM2RlYjdmNjZmZmI4YTkwNDgwODc0MmVjNzMxNzAxZDFjOWU0Y2YxY2ZmODc3MDI2NGZhYzc3NzliZDk0YipuvLY=: 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTUxM2RlYjdmNjZmZmI4YTkwNDgwODc0MmVjNzMxNzAxZDFjOWU0Y2YxY2ZmODc3MDI2NGZhYzc3NzliZDk0YipuvLY=: 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:13.651 10:52:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.913 nvme0n1 00:27:13.913 10:52:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:13.913 10:52:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.913 10:52:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.913 10:52:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:13.913 10:52:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRjMzlmNzE5YmY3MzViNjVlMGZhMmQ2YTU2M2NlYzVcMolM: 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRjMzlmNzE5YmY3MzViNjVlMGZhMmQ2YTU2M2NlYzVcMolM: 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: ]] 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:14.174 10:52:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.746 nvme0n1 00:27:14.746 10:52:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:14.746 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.746 10:52:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.746 10:52:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:14.746 10:52:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.746 10:52:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:15.007 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.007 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.007 10:52:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:15.007 10:52:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.007 10:52:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:15.007 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.007 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:15.008 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.008 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:15.008 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:15.008 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:15.008 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQzYTFhYzUyNzVkMDllN2JkZWYwNTMxYWEyODMyYWIwNDU4MTc2YjZhYjA2Mzg4OLc2PA==: 00:27:15.008 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: 00:27:15.008 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:15.008 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:15.008 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQzYTFhYzUyNzVkMDllN2JkZWYwNTMxYWEyODMyYWIwNDU4MTc2YjZhYjA2Mzg4OLc2PA==: 00:27:15.008 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: ]] 00:27:15.008 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: 00:27:15.008 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:15.008 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.008 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:15.008 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:15.008 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:15.008 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.008 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:15.008 10:52:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:15.008 10:52:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.008 10:52:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:15.008 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.008 10:52:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:27:15.008 10:52:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:15.008 10:52:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:15.008 10:52:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.008 10:52:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.008 10:52:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:15.008 10:52:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.008 10:52:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:15.008 10:52:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:15.008 10:52:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:15.008 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:15.008 10:52:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:15.008 10:52:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.584 nvme0n1 00:27:15.584 10:52:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:15.584 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.584 10:52:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:15.584 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.584 10:52:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.584 10:52:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:15.584 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.584 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.584 10:52:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:15.584 10:52:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.848 10:52:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:15.848 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.848 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:15.848 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.848 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:15.848 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:15.848 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:15.848 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTQzNTIwODcyZTU3YWIxNTNhNjBmZmM5YTFkZjE3ODGHs8sh: 00:27:15.848 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: 00:27:15.848 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:15.848 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:15.848 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MTQzNTIwODcyZTU3YWIxNTNhNjBmZmM5YTFkZjE3ODGHs8sh: 00:27:15.848 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: ]] 00:27:15.848 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: 00:27:15.848 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:15.848 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.848 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:15.848 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:15.848 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:15.848 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.848 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:15.848 10:52:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:15.848 10:52:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.848 10:52:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:15.848 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.848 10:52:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:15.848 10:52:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:15.848 10:52:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:15.848 10:52:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.848 10:52:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.848 10:52:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:15.848 10:52:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.848 10:52:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:15.848 10:52:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:15.848 10:52:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:15.848 10:52:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:15.848 10:52:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:15.848 10:52:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.419 nvme0n1 00:27:16.419 10:52:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.419 10:52:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.419 10:52:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.419 10:52:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.419 10:52:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.419 10:52:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.419 10:52:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.419 
10:52:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.419 10:52:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.419 10:52:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.419 10:52:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.419 10:52:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.419 10:52:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:16.419 10:52:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.419 10:52:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:16.419 10:52:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:16.419 10:52:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:16.419 10:52:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yzg5N2MwNDgzMDQyMDY0YjQzNTVlZjJmN2IwYzFiNjA1NTliNWUzZTg2ZGVkMzM26lD5xA==: 00:27:16.419 10:52:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: 00:27:16.419 10:52:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:16.419 10:52:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:16.419 10:52:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yzg5N2MwNDgzMDQyMDY0YjQzNTVlZjJmN2IwYzFiNjA1NTliNWUzZTg2ZGVkMzM26lD5xA==: 00:27:16.419 10:52:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: ]] 00:27:16.419 10:52:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: 00:27:16.419 10:52:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:16.419 10:52:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.419 10:52:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:16.419 10:52:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:16.419 10:52:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:16.419 10:52:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.419 10:52:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:16.419 10:52:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.419 10:52:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.682 10:52:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:16.682 10:52:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.682 10:52:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:16.682 10:52:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:16.682 10:52:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:16.682 10:52:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.682 10:52:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.682 10:52:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
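The nvmf/common.sh lines running through these entries are get_main_ns_ip resolving which address to dial: an associative array maps the transport to the name of the variable holding the address (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp), and for this tcp run the indirection resolves to 10.0.0.1. A rough reconstruction of that selection, with the variable values assumed from this trace rather than copied from nvmf/common.sh (the transport variable name below is an assumption; the trace only shows its expanded value, "tcp"):

    NVMF_INITIATOR_IP=10.0.0.1              # value observed in this run
    TEST_TRANSPORT=tcp                      # assumed name for the transport variable
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
        ip=${ip_candidates[$TEST_TRANSPORT]}    # pick the variable *name* for this transport
        echo "${!ip}"                           # indirect expansion -> 10.0.0.1
    }

The real helper also guards against an unset transport and an empty address (the [[ -z ... ]] checks in the trace); those checks are omitted here for brevity.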
00:27:16.682 10:52:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.682 10:52:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:16.682 10:52:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:16.682 10:52:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:16.682 10:52:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:16.682 10:52:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:16.682 10:52:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.280 nvme0n1 00:27:17.280 10:52:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.280 10:52:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.280 10:52:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.280 10:52:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.280 10:52:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.280 10:52:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.280 10:52:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.280 10:52:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.280 10:52:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.280 10:52:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.280 10:52:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.280 10:52:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.280 10:52:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:17.280 10:52:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.280 10:52:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:17.280 10:52:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:17.280 10:52:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:17.280 10:52:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTUxM2RlYjdmNjZmZmI4YTkwNDgwODc0MmVjNzMxNzAxZDFjOWU0Y2YxY2ZmODc3MDI2NGZhYzc3NzliZDk0YipuvLY=: 00:27:17.280 10:52:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:17.280 10:52:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:17.280 10:52:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:17.280 10:52:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTUxM2RlYjdmNjZmZmI4YTkwNDgwODc0MmVjNzMxNzAxZDFjOWU0Y2YxY2ZmODc3MDI2NGZhYzc3NzliZDk0YipuvLY=: 00:27:17.280 10:52:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:17.280 10:52:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:17.280 10:52:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.280 10:52:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:17.280 10:52:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:17.280 
10:52:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:17.280 10:52:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.280 10:52:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:17.280 10:52:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.280 10:52:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.280 10:52:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.280 10:52:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.281 10:52:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:17.281 10:52:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:17.281 10:52:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:17.281 10:52:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.281 10:52:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.281 10:52:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:17.281 10:52:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.281 10:52:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:17.281 10:52:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:17.281 10:52:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:17.281 10:52:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:17.281 10:52:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.281 10:52:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.220 nvme0n1 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRjMzlmNzE5YmY3MzViNjVlMGZhMmQ2YTU2M2NlYzVcMolM: 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRjMzlmNzE5YmY3MzViNjVlMGZhMmQ2YTU2M2NlYzVcMolM: 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: ]] 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.220 nvme0n1 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.220 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.480 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.480 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.480 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.480 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.480 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQzYTFhYzUyNzVkMDllN2JkZWYwNTMxYWEyODMyYWIwNDU4MTc2YjZhYjA2Mzg4OLc2PA==: 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQzYTFhYzUyNzVkMDllN2JkZWYwNTMxYWEyODMyYWIwNDU4MTc2YjZhYjA2Mzg4OLc2PA==: 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: ]] 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.481 nvme0n1 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTQzNTIwODcyZTU3YWIxNTNhNjBmZmM5YTFkZjE3ODGHs8sh: 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTQzNTIwODcyZTU3YWIxNTNhNjBmZmM5YTFkZjE3ODGHs8sh: 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: ]] 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.481 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.742 nvme0n1 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yzg5N2MwNDgzMDQyMDY0YjQzNTVlZjJmN2IwYzFiNjA1NTliNWUzZTg2ZGVkMzM26lD5xA==: 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yzg5N2MwNDgzMDQyMDY0YjQzNTVlZjJmN2IwYzFiNjA1NTliNWUzZTg2ZGVkMzM26lD5xA==: 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: ]] 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:18.742 10:52:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.003 nvme0n1 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTUxM2RlYjdmNjZmZmI4YTkwNDgwODc0MmVjNzMxNzAxZDFjOWU0Y2YxY2ZmODc3MDI2NGZhYzc3NzliZDk0YipuvLY=: 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NTUxM2RlYjdmNjZmZmI4YTkwNDgwODc0MmVjNzMxNzAxZDFjOWU0Y2YxY2ZmODc3MDI2NGZhYzc3NzliZDk0YipuvLY=: 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.003 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.264 nvme0n1 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRjMzlmNzE5YmY3MzViNjVlMGZhMmQ2YTU2M2NlYzVcMolM: 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRjMzlmNzE5YmY3MzViNjVlMGZhMmQ2YTU2M2NlYzVcMolM: 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: ]] 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.264 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.525 nvme0n1 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQzYTFhYzUyNzVkMDllN2JkZWYwNTMxYWEyODMyYWIwNDU4MTc2YjZhYjA2Mzg4OLc2PA==: 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQzYTFhYzUyNzVkMDllN2JkZWYwNTMxYWEyODMyYWIwNDU4MTc2YjZhYjA2Mzg4OLc2PA==: 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: ]] 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.525 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.785 nvme0n1 00:27:19.785 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTQzNTIwODcyZTU3YWIxNTNhNjBmZmM5YTFkZjE3ODGHs8sh: 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTQzNTIwODcyZTU3YWIxNTNhNjBmZmM5YTFkZjE3ODGHs8sh: 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: ]] 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:19.786 10:52:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.046 nvme0n1 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yzg5N2MwNDgzMDQyMDY0YjQzNTVlZjJmN2IwYzFiNjA1NTliNWUzZTg2ZGVkMzM26lD5xA==: 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yzg5N2MwNDgzMDQyMDY0YjQzNTVlZjJmN2IwYzFiNjA1NTliNWUzZTg2ZGVkMzM26lD5xA==: 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: ]] 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:20.046 10:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.307 nvme0n1 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NTUxM2RlYjdmNjZmZmI4YTkwNDgwODc0MmVjNzMxNzAxZDFjOWU0Y2YxY2ZmODc3MDI2NGZhYzc3NzliZDk0YipuvLY=: 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTUxM2RlYjdmNjZmZmI4YTkwNDgwODc0MmVjNzMxNzAxZDFjOWU0Y2YxY2ZmODc3MDI2NGZhYzc3NzliZDk0YipuvLY=: 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:20.307 10:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.568 nvme0n1 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:20.568 10:52:44 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRjMzlmNzE5YmY3MzViNjVlMGZhMmQ2YTU2M2NlYzVcMolM: 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRjMzlmNzE5YmY3MzViNjVlMGZhMmQ2YTU2M2NlYzVcMolM: 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: ]] 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:20.568 10:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.831 nvme0n1 00:27:20.831 10:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:20.831 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.831 10:52:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.831 10:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:20.831 10:52:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQzYTFhYzUyNzVkMDllN2JkZWYwNTMxYWEyODMyYWIwNDU4MTc2YjZhYjA2Mzg4OLc2PA==: 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGQzYTFhYzUyNzVkMDllN2JkZWYwNTMxYWEyODMyYWIwNDU4MTc2YjZhYjA2Mzg4OLc2PA==: 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: ]] 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:20.831 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:20.832 10:52:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:20.832 10:52:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.092 nvme0n1 00:27:21.092 10:52:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.092 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.092 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.092 10:52:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.092 10:52:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.092 10:52:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.352 10:52:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.352 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.352 10:52:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.352 10:52:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.352 10:52:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.352 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.352 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:21.352 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.353 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:21.353 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:21.353 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:21.353 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTQzNTIwODcyZTU3YWIxNTNhNjBmZmM5YTFkZjE3ODGHs8sh: 00:27:21.353 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: 00:27:21.353 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:21.353 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:21.353 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTQzNTIwODcyZTU3YWIxNTNhNjBmZmM5YTFkZjE3ODGHs8sh: 00:27:21.353 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: ]] 00:27:21.353 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: 00:27:21.353 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:21.353 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.353 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:21.353 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:21.353 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:21.353 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.353 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:21.353 10:52:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.353 10:52:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.353 10:52:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.353 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.353 10:52:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:21.353 10:52:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:21.353 10:52:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:21.353 10:52:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.353 10:52:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.353 10:52:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:21.353 10:52:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.353 10:52:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:21.353 10:52:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:21.353 10:52:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:21.353 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:21.353 10:52:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.353 10:52:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.613 nvme0n1 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yzg5N2MwNDgzMDQyMDY0YjQzNTVlZjJmN2IwYzFiNjA1NTliNWUzZTg2ZGVkMzM26lD5xA==: 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yzg5N2MwNDgzMDQyMDY0YjQzNTVlZjJmN2IwYzFiNjA1NTliNWUzZTg2ZGVkMzM26lD5xA==: 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: ]] 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:21.613 10:52:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.613 10:52:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.874 nvme0n1 00:27:21.874 10:52:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.874 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.874 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.874 10:52:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.874 10:52:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.874 10:52:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.874 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.874 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.874 10:52:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.874 10:52:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.874 10:52:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.874 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:21.874 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:21.874 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.874 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:21.874 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:21.874 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:21.874 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTUxM2RlYjdmNjZmZmI4YTkwNDgwODc0MmVjNzMxNzAxZDFjOWU0Y2YxY2ZmODc3MDI2NGZhYzc3NzliZDk0YipuvLY=: 00:27:21.874 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:21.874 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:21.874 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:21.874 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTUxM2RlYjdmNjZmZmI4YTkwNDgwODc0MmVjNzMxNzAxZDFjOWU0Y2YxY2ZmODc3MDI2NGZhYzc3NzliZDk0YipuvLY=: 00:27:21.874 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:21.874 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:21.874 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.874 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:21.874 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:21.874 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:21.874 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.874 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:21.874 10:52:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:21.874 10:52:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.874 10:52:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:21.875 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.875 10:52:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:21.875 10:52:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:21.875 10:52:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:21.875 10:52:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.875 10:52:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.875 10:52:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:21.875 10:52:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.875 10:52:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:21.875 10:52:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:21.875 10:52:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:21.875 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:21.875 10:52:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:27:21.875 10:52:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.135 nvme0n1 00:27:22.135 10:52:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:22.135 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.135 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.135 10:52:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:22.135 10:52:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.135 10:52:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:22.395 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.395 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.395 10:52:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:22.395 10:52:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.395 10:52:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:22.395 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:22.395 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.396 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:22.396 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.396 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:22.396 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:22.396 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:22.396 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRjMzlmNzE5YmY3MzViNjVlMGZhMmQ2YTU2M2NlYzVcMolM: 00:27:22.396 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: 00:27:22.396 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:22.396 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:22.396 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRjMzlmNzE5YmY3MzViNjVlMGZhMmQ2YTU2M2NlYzVcMolM: 00:27:22.396 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: ]] 00:27:22.396 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: 00:27:22.396 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:22.396 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.396 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:22.396 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:22.396 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:22.396 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.396 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:27:22.396 10:52:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:22.396 10:52:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.396 10:52:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:22.396 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.396 10:52:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:22.396 10:52:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:22.396 10:52:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:22.396 10:52:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.396 10:52:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.396 10:52:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:22.396 10:52:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.396 10:52:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:22.396 10:52:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:22.396 10:52:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:22.396 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:22.396 10:52:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:22.396 10:52:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.656 nvme0n1 00:27:22.656 10:52:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:22.656 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.656 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.656 10:52:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:22.656 10:52:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.656 10:52:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MGQzYTFhYzUyNzVkMDllN2JkZWYwNTMxYWEyODMyYWIwNDU4MTc2YjZhYjA2Mzg4OLc2PA==: 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQzYTFhYzUyNzVkMDllN2JkZWYwNTMxYWEyODMyYWIwNDU4MTc2YjZhYjA2Mzg4OLc2PA==: 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: ]] 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:22.916 10:52:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.176 nvme0n1 00:27:23.176 10:52:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:23.176 10:52:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.176 10:52:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:23.176 10:52:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.176 10:52:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.176 10:52:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTQzNTIwODcyZTU3YWIxNTNhNjBmZmM5YTFkZjE3ODGHs8sh: 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTQzNTIwODcyZTU3YWIxNTNhNjBmZmM5YTFkZjE3ODGHs8sh: 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: ]] 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:23.436 10:52:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.697 nvme0n1 00:27:23.697 10:52:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:23.697 10:52:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.697 10:52:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.697 10:52:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:23.697 10:52:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.697 10:52:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yzg5N2MwNDgzMDQyMDY0YjQzNTVlZjJmN2IwYzFiNjA1NTliNWUzZTg2ZGVkMzM26lD5xA==: 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:Yzg5N2MwNDgzMDQyMDY0YjQzNTVlZjJmN2IwYzFiNjA1NTliNWUzZTg2ZGVkMzM26lD5xA==: 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: ]] 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:23.958 10:52:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.219 nvme0n1 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTUxM2RlYjdmNjZmZmI4YTkwNDgwODc0MmVjNzMxNzAxZDFjOWU0Y2YxY2ZmODc3MDI2NGZhYzc3NzliZDk0YipuvLY=: 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTUxM2RlYjdmNjZmZmI4YTkwNDgwODc0MmVjNzMxNzAxZDFjOWU0Y2YxY2ZmODc3MDI2NGZhYzc3NzliZDk0YipuvLY=: 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
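[editor's note] For reference, the host-side portion of each iteration traced above reduces to a short sequence of SPDK RPCs. The sketch below replays one case (sha384 / ffdhe4096, keyid 1, as near the top of this trace) with SPDK's scripts/rpc.py client instead of the test's rpc_cmd wrapper; the rpc.py path is an assumption, and it assumes the DHHC-1 secrets are already registered under the key names key1/ckey1. The target address, NQNs, and flags are taken verbatim from the log.
  #!/usr/bin/env bash
  # Minimal sketch of one connect_authenticate pass (host side), assuming the
  # secrets are already loaded as "key1"/"ckey1" and the target listens on
  # 10.0.0.1:4420 as in the log above.
  set -e
  RPC=./scripts/rpc.py   # SPDK RPC client; path is an assumption
  # Restrict the initiator to the digest/DH-group pair under test.
  $RPC bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
  # Attach the controller, authenticating with key1 (and bidirectionally with ckey1).
  $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Verify the controller came up, then detach so the next iteration starts clean.
  $RPC bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
  $RPC bdev_nvme_detach_controller nvme0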
00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:24.219 10:52:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:24.220 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:24.220 10:52:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.480 10:52:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.740 nvme0n1 00:27:24.740 10:52:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.740 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.740 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.740 10:52:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.740 10:52:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.740 10:52:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.740 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.740 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.740 10:52:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.741 10:52:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.741 10:52:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.741 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:24.741 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.741 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:24.741 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.741 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:24.741 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:24.741 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:24.741 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRjMzlmNzE5YmY3MzViNjVlMGZhMmQ2YTU2M2NlYzVcMolM: 00:27:24.741 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: 00:27:24.741 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:24.741 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:24.741 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRjMzlmNzE5YmY3MzViNjVlMGZhMmQ2YTU2M2NlYzVcMolM: 00:27:24.741 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: ]] 00:27:24.741 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: 00:27:24.741 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:24.741 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
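[editor's note] The target-side half of each iteration (nvmet_auth_set_key above) provisions the same DHHC-1 secrets for the host entry on the kernel nvmet target. The xtrace only shows the echoed values, not their destinations, so the configfs attribute names below are an assumption based on the standard Linux nvmet host layout; the digest, DH group, and secrets are the sha384 / ffdhe8192 / keyid 0 values visible in this trace.
  # Sketch of nvmet_auth_set_key for (sha384, ffdhe8192, keyid=0); attribute
  # names are assumed (Linux nvmet configfs host entry), values are from the log.
  HOSTNQN=nqn.2024-02.io.spdk:host0
  HOSTDIR=/sys/kernel/config/nvmet/hosts/$HOSTNQN
  KEY='DHHC-1:00:YjRjMzlmNzE5YmY3MzViNjVlMGZhMmQ2YTU2M2NlYzVcMolM:'
  CKEY='DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=:'
  echo 'hmac(sha384)' > "$HOSTDIR/dhchap_hash"      # digest under test
  echo ffdhe8192      > "$HOSTDIR/dhchap_dhgroup"   # DH group under test
  echo "$KEY"         > "$HOSTDIR/dhchap_key"       # host secret
  echo "$CKEY"        > "$HOSTDIR/dhchap_ctrl_key"  # controller secret (bidirectional auth)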
00:27:24.741 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:24.741 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:24.741 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:24.741 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.741 10:52:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:24.741 10:52:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.741 10:52:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.741 10:52:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:24.741 10:52:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.741 10:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:24.741 10:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:24.741 10:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:24.741 10:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.741 10:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.741 10:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:24.741 10:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.741 10:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:24.741 10:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:24.741 10:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:24.741 10:52:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:24.741 10:52:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:24.741 10:52:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.683 nvme0n1 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQzYTFhYzUyNzVkMDllN2JkZWYwNTMxYWEyODMyYWIwNDU4MTc2YjZhYjA2Mzg4OLc2PA==: 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQzYTFhYzUyNzVkMDllN2JkZWYwNTMxYWEyODMyYWIwNDU4MTc2YjZhYjA2Mzg4OLc2PA==: 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: ]] 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.683 10:52:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.626 nvme0n1 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTQzNTIwODcyZTU3YWIxNTNhNjBmZmM5YTFkZjE3ODGHs8sh: 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTQzNTIwODcyZTU3YWIxNTNhNjBmZmM5YTFkZjE3ODGHs8sh: 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: ]] 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:26.626 10:52:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.197 nvme0n1 00:27:27.197 10:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:27.197 10:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.197 10:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.197 10:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:27.197 10:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Yzg5N2MwNDgzMDQyMDY0YjQzNTVlZjJmN2IwYzFiNjA1NTliNWUzZTg2ZGVkMzM26lD5xA==: 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yzg5N2MwNDgzMDQyMDY0YjQzNTVlZjJmN2IwYzFiNjA1NTliNWUzZTg2ZGVkMzM26lD5xA==: 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: ]] 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:27.198 10:52:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.139 nvme0n1 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTUxM2RlYjdmNjZmZmI4YTkwNDgwODc0MmVjNzMxNzAxZDFjOWU0Y2YxY2ZmODc3MDI2NGZhYzc3NzliZDk0YipuvLY=: 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTUxM2RlYjdmNjZmZmI4YTkwNDgwODc0MmVjNzMxNzAxZDFjOWU0Y2YxY2ZmODc3MDI2NGZhYzc3NzliZDk0YipuvLY=: 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.140 10:52:52 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.140 10:52:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.710 nvme0n1 00:27:28.710 10:52:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.710 10:52:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.710 10:52:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.710 10:52:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.710 10:52:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.710 10:52:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.971 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.971 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.971 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.971 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.971 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.971 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:28.971 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:28.971 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.971 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:28.971 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.971 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:28.971 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:28.971 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:28.971 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRjMzlmNzE5YmY3MzViNjVlMGZhMmQ2YTU2M2NlYzVcMolM: 00:27:28.971 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: 00:27:28.971 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:28.971 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:28.971 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YjRjMzlmNzE5YmY3MzViNjVlMGZhMmQ2YTU2M2NlYzVcMolM: 00:27:28.971 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: ]] 00:27:28.971 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: 00:27:28.971 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:28.971 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.972 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:28.972 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:28.972 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:28.972 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.972 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:28.972 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.972 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.972 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.972 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.972 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.972 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.972 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.972 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.972 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.972 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.972 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.972 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.972 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.972 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.972 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:28.972 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.972 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.972 nvme0n1 00:27:28.972 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.972 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.972 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.972 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.972 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.972 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.972 10:52:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.972 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.972 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:28.972 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.972 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:28.972 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.972 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:28.972 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.972 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:28.972 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:28.972 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:28.972 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQzYTFhYzUyNzVkMDllN2JkZWYwNTMxYWEyODMyYWIwNDU4MTc2YjZhYjA2Mzg4OLc2PA==: 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQzYTFhYzUyNzVkMDllN2JkZWYwNTMxYWEyODMyYWIwNDU4MTc2YjZhYjA2Mzg4OLc2PA==: 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: ]] 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.233 nvme0n1 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTQzNTIwODcyZTU3YWIxNTNhNjBmZmM5YTFkZjE3ODGHs8sh: 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTQzNTIwODcyZTU3YWIxNTNhNjBmZmM5YTFkZjE3ODGHs8sh: 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: ]] 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:29.233 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.494 nvme0n1 00:27:29.494 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:29.494 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.494 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.494 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:29.494 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.494 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:29.494 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.494 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.494 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:29.494 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.494 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:29.494 10:52:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.494 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:29.495 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.495 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:29.495 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:29.495 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:29.495 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yzg5N2MwNDgzMDQyMDY0YjQzNTVlZjJmN2IwYzFiNjA1NTliNWUzZTg2ZGVkMzM26lD5xA==: 00:27:29.495 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: 00:27:29.495 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:29.495 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:29.495 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yzg5N2MwNDgzMDQyMDY0YjQzNTVlZjJmN2IwYzFiNjA1NTliNWUzZTg2ZGVkMzM26lD5xA==: 00:27:29.495 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: ]] 00:27:29.495 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: 00:27:29.495 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:29.495 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.495 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:29.495 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:29.495 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:29.495 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.495 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:29.495 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:29.495 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.495 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:29.495 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.495 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.495 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.495 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.495 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.495 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.495 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.495 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.495 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.495 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.495 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.495 10:52:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:29.495 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:29.495 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.755 nvme0n1 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTUxM2RlYjdmNjZmZmI4YTkwNDgwODc0MmVjNzMxNzAxZDFjOWU0Y2YxY2ZmODc3MDI2NGZhYzc3NzliZDk0YipuvLY=: 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTUxM2RlYjdmNjZmZmI4YTkwNDgwODc0MmVjNzMxNzAxZDFjOWU0Y2YxY2ZmODc3MDI2NGZhYzc3NzliZDk0YipuvLY=: 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:29.756 10:52:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.016 nvme0n1 00:27:30.016 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.016 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.016 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.016 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.016 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.016 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.016 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.016 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.016 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.016 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.016 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.017 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:30.017 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.017 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:30.017 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.017 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:30.017 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:30.017 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:30.017 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjRjMzlmNzE5YmY3MzViNjVlMGZhMmQ2YTU2M2NlYzVcMolM: 00:27:30.017 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: 00:27:30.017 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:30.017 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:30.017 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRjMzlmNzE5YmY3MzViNjVlMGZhMmQ2YTU2M2NlYzVcMolM: 00:27:30.017 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: ]] 00:27:30.017 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: 00:27:30.017 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:30.017 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.017 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:30.017 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:30.017 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:30.017 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.017 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:30.017 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.017 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.017 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.017 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.017 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.017 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.017 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.017 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.017 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.017 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.017 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.017 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.017 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.017 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.017 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:30.017 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.017 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.278 nvme0n1 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.278 
10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQzYTFhYzUyNzVkMDllN2JkZWYwNTMxYWEyODMyYWIwNDU4MTc2YjZhYjA2Mzg4OLc2PA==: 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQzYTFhYzUyNzVkMDllN2JkZWYwNTMxYWEyODMyYWIwNDU4MTc2YjZhYjA2Mzg4OLc2PA==: 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: ]] 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.278 10:52:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.278 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.539 nvme0n1 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTQzNTIwODcyZTU3YWIxNTNhNjBmZmM5YTFkZjE3ODGHs8sh: 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTQzNTIwODcyZTU3YWIxNTNhNjBmZmM5YTFkZjE3ODGHs8sh: 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: ]] 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.539 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.800 nvme0n1 00:27:30.800 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.800 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.800 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.800 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.800 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.800 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.800 10:52:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.800 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.800 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.800 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.800 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.800 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.800 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:30.800 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.800 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:30.800 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:30.800 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:30.800 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yzg5N2MwNDgzMDQyMDY0YjQzNTVlZjJmN2IwYzFiNjA1NTliNWUzZTg2ZGVkMzM26lD5xA==: 00:27:30.800 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: 00:27:30.801 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:30.801 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:30.801 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yzg5N2MwNDgzMDQyMDY0YjQzNTVlZjJmN2IwYzFiNjA1NTliNWUzZTg2ZGVkMzM26lD5xA==: 00:27:30.801 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: ]] 00:27:30.801 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: 00:27:30.801 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:30.801 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.801 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:30.801 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:30.801 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:30.801 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.801 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:30.801 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.801 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.801 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:30.801 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.801 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.801 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.801 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.801 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.801 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:27:30.801 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.801 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.801 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.801 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.801 10:52:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.801 10:52:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:30.801 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:30.801 10:52:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.062 nvme0n1 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTUxM2RlYjdmNjZmZmI4YTkwNDgwODc0MmVjNzMxNzAxZDFjOWU0Y2YxY2ZmODc3MDI2NGZhYzc3NzliZDk0YipuvLY=: 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTUxM2RlYjdmNjZmZmI4YTkwNDgwODc0MmVjNzMxNzAxZDFjOWU0Y2YxY2ZmODc3MDI2NGZhYzc3NzliZDk0YipuvLY=: 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:31.062 
10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.062 10:52:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.323 nvme0n1 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRjMzlmNzE5YmY3MzViNjVlMGZhMmQ2YTU2M2NlYzVcMolM: 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRjMzlmNzE5YmY3MzViNjVlMGZhMmQ2YTU2M2NlYzVcMolM: 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: ]] 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.323 10:52:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.324 10:52:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.324 10:52:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.324 10:52:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.324 10:52:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.324 10:52:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.324 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:31.324 10:52:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.324 10:52:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.584 nvme0n1 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQzYTFhYzUyNzVkMDllN2JkZWYwNTMxYWEyODMyYWIwNDU4MTc2YjZhYjA2Mzg4OLc2PA==: 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQzYTFhYzUyNzVkMDllN2JkZWYwNTMxYWEyODMyYWIwNDU4MTc2YjZhYjA2Mzg4OLc2PA==: 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: ]] 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.584 10:52:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.584 10:52:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.845 nvme0n1 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
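The nvmet_auth_set_key calls traced here provision the kernel nvmet target for each digest/dhgroup/keyid combination: the echoed 'hmac(sha512)', the DH group name and the DHHC-1 secrets are redirected into the per-host configfs attributes of the Linux target. A minimal sketch of that provisioning step, assuming the usual configfs layout under /sys/kernel/config/nvmet (the attribute names and the host NQN path are assumptions, not taken from this log; the real helper lives in the suite's host/auth.sh):

# Sketch: push one DH-HMAC-CHAP key pair to the kernel nvmet target via configfs.
nvmet_auth_set_key_sketch() {
    local digest=$1 dhgroup=$2 key=$3 ckey=${4:-}
    local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path

    echo "hmac(${digest})" > "${host_dir}/dhchap_hash"      # e.g. hmac(sha512)
    echo "${dhgroup}" > "${host_dir}/dhchap_dhgroup"         # e.g. ffdhe4096
    echo "${key}" > "${host_dir}/dhchap_key"                 # host secret, DHHC-1:xx:...
    # The controller secret is optional; set it only when bidirectional auth is under test.
    [[ -z ${ckey} ]] || echo "${ckey}" > "${host_dir}/dhchap_ctrl_key"
}

# Example mirroring the keyid=2 iteration in this trace (secrets abbreviated):
# nvmet_auth_set_key_sketch sha512 ffdhe4096 'DHHC-1:01:MTQz...' 'DHHC-1:01:MTc1...'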
00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTQzNTIwODcyZTU3YWIxNTNhNjBmZmM5YTFkZjE3ODGHs8sh: 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTQzNTIwODcyZTU3YWIxNTNhNjBmZmM5YTFkZjE3ODGHs8sh: 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: ]] 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.845 10:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.105 nvme0n1 00:27:32.105 10:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yzg5N2MwNDgzMDQyMDY0YjQzNTVlZjJmN2IwYzFiNjA1NTliNWUzZTg2ZGVkMzM26lD5xA==: 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yzg5N2MwNDgzMDQyMDY0YjQzNTVlZjJmN2IwYzFiNjA1NTliNWUzZTg2ZGVkMzM26lD5xA==: 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: ]] 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.366 10:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.626 nvme0n1 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTUxM2RlYjdmNjZmZmI4YTkwNDgwODc0MmVjNzMxNzAxZDFjOWU0Y2YxY2ZmODc3MDI2NGZhYzc3NzliZDk0YipuvLY=: 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NTUxM2RlYjdmNjZmZmI4YTkwNDgwODc0MmVjNzMxNzAxZDFjOWU0Y2YxY2ZmODc3MDI2NGZhYzc3NzliZDk0YipuvLY=: 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.626 10:52:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.886 nvme0n1 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRjMzlmNzE5YmY3MzViNjVlMGZhMmQ2YTU2M2NlYzVcMolM: 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRjMzlmNzE5YmY3MzViNjVlMGZhMmQ2YTU2M2NlYzVcMolM: 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: ]] 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
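The get_main_ns_ip block running here (and replayed before every attach) only decides which environment variable holds the address to dial: NVMF_FIRST_TARGET_IP for RDMA runs, NVMF_INITIATOR_IP for TCP runs like this one, which resolves to 10.0.0.1. A condensed sketch of that selection logic, reconstructed from the trace (the transport variable name is an assumption):

# Sketch of the address-selection logic from nvmf/common.sh that this trace replays.
get_main_ns_ip_sketch() {
    local ip transport=${TEST_TRANSPORT:-tcp}    # transport variable name assumed
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP            # RDMA jobs dial the first target IP
        ["tcp"]=NVMF_INITIATOR_IP                # TCP jobs dial the initiator-side IP
    )

    [[ -n ${transport} && -n ${ip_candidates[$transport]:-} ]] || return 1
    ip=${ip_candidates[$transport]}              # a variable *name*, e.g. NVMF_INITIATOR_IP
    [[ -n ${!ip:-} ]] || return 1                # indirect expansion yields the address itself
    echo "${!ip}"                                # -> 10.0.0.1 in this run
}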
00:27:32.886 10:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.887 10:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.887 10:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.887 10:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.887 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:32.887 10:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.887 10:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.457 nvme0n1 00:27:33.457 10:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:33.457 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.457 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.457 10:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:33.457 10:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.457 10:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:33.457 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.457 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.457 10:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:33.457 10:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.457 10:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:33.457 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.457 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:33.457 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.457 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:33.457 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:33.457 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:33.457 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQzYTFhYzUyNzVkMDllN2JkZWYwNTMxYWEyODMyYWIwNDU4MTc2YjZhYjA2Mzg4OLc2PA==: 00:27:33.457 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: 00:27:33.457 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:33.457 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:33.458 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQzYTFhYzUyNzVkMDllN2JkZWYwNTMxYWEyODMyYWIwNDU4MTc2YjZhYjA2Mzg4OLc2PA==: 00:27:33.458 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: ]] 00:27:33.458 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: 00:27:33.458 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
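Each connect_authenticate pass, like the sha512/ffdhe6144/keyid=1 one starting above, follows the same shape: pin the initiator to a single digest and DH group, dial the target with the matching secrets, confirm a controller named nvme0 appeared, then detach before the next combination. Issued directly with SPDK's scripts/rpc.py the sequence would look roughly like this (the suite drives it through its rpc_cmd wrapper instead; the socket path is the default, and key1/ckey1 name secrets the suite registered earlier in the run):

# Sketch of one connect_authenticate pass using rpc.py directly.
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"

# 1. Restrict the initiator to the digest/dhgroup under test.
$RPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

# 2. Attach with the host key (and the controller key when testing bidirectional auth).
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# 3. Authentication succeeded only if the controller materialized; then clean up.
[[ "$($RPC bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
$RPC bdev_nvme_detach_controller nvme0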
00:27:33.458 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.458 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:33.458 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:33.458 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:33.458 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.458 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:33.458 10:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:33.458 10:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.458 10:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:33.458 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.458 10:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.458 10:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.458 10:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.458 10:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.458 10:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.458 10:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.458 10:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.458 10:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.458 10:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.458 10:52:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.458 10:52:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:33.458 10:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:33.458 10:52:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.029 nvme0n1 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTQzNTIwODcyZTU3YWIxNTNhNjBmZmM5YTFkZjE3ODGHs8sh: 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTQzNTIwODcyZTU3YWIxNTNhNjBmZmM5YTFkZjE3ODGHs8sh: 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: ]] 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.029 10:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.598 nvme0n1 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yzg5N2MwNDgzMDQyMDY0YjQzNTVlZjJmN2IwYzFiNjA1NTliNWUzZTg2ZGVkMzM26lD5xA==: 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yzg5N2MwNDgzMDQyMDY0YjQzNTVlZjJmN2IwYzFiNjA1NTliNWUzZTg2ZGVkMzM26lD5xA==: 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: ]] 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.598 10:52:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.178 nvme0n1 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NTUxM2RlYjdmNjZmZmI4YTkwNDgwODc0MmVjNzMxNzAxZDFjOWU0Y2YxY2ZmODc3MDI2NGZhYzc3NzliZDk0YipuvLY=: 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTUxM2RlYjdmNjZmZmI4YTkwNDgwODc0MmVjNzMxNzAxZDFjOWU0Y2YxY2ZmODc3MDI2NGZhYzc3NzliZDk0YipuvLY=: 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.178 10:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.179 10:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.179 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:35.179 10:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.179 10:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.439 nvme0n1 00:27:35.439 10:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.439 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.439 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.439 10:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.439 10:52:59 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.439 10:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.699 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.699 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.699 10:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.699 10:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.699 10:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.699 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:35.699 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.699 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:35.699 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.699 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:35.699 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:35.699 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:35.699 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjRjMzlmNzE5YmY3MzViNjVlMGZhMmQ2YTU2M2NlYzVcMolM: 00:27:35.700 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: 00:27:35.700 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:35.700 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:35.700 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjRjMzlmNzE5YmY3MzViNjVlMGZhMmQ2YTU2M2NlYzVcMolM: 00:27:35.700 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: ]] 00:27:35.700 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmM2NmU1Zjg3ZTI3MTdkN2MxZWQyM2IyMDFjNWI1ZmEwMzkxYjNiZTRiYjYyN2EzODJkODJhMDI3NjM2MmM4Yks1XVw=: 00:27:35.700 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:35.700 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.700 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:35.700 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:35.700 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:35.700 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.700 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:35.700 10:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.700 10:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.700 10:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.700 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.700 10:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.700 10:52:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.700 10:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.700 10:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.700 10:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.700 10:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.700 10:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.700 10:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.700 10:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.700 10:52:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.700 10:52:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:35.700 10:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.700 10:52:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.270 nvme0n1 00:27:36.270 10:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.270 10:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.270 10:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.270 10:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.270 10:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.270 10:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQzYTFhYzUyNzVkMDllN2JkZWYwNTMxYWEyODMyYWIwNDU4MTc2YjZhYjA2Mzg4OLc2PA==: 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGQzYTFhYzUyNzVkMDllN2JkZWYwNTMxYWEyODMyYWIwNDU4MTc2YjZhYjA2Mzg4OLc2PA==: 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: ]] 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.531 10:53:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.102 nvme0n1 00:27:37.102 10:53:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.102 10:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.102 10:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.102 10:53:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.102 10:53:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.102 10:53:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.102 10:53:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.102 10:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.102 10:53:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.102 10:53:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.363 10:53:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.363 10:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.363 10:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:37.363 10:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.363 10:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:37.363 10:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:37.363 10:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:37.363 10:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTQzNTIwODcyZTU3YWIxNTNhNjBmZmM5YTFkZjE3ODGHs8sh: 00:27:37.363 10:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: 00:27:37.363 10:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:37.363 10:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:37.363 10:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTQzNTIwODcyZTU3YWIxNTNhNjBmZmM5YTFkZjE3ODGHs8sh: 00:27:37.363 10:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: ]] 00:27:37.363 10:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTc1M2MwNDQxMjZlNTVjM2U5N2JmODZmMzBkMzI5Mma2qZlG: 00:27:37.363 10:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:37.363 10:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.363 10:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:37.363 10:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:37.363 10:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:37.363 10:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.363 10:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:37.363 10:53:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.363 10:53:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.363 10:53:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.363 10:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.363 10:53:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.363 10:53:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.363 10:53:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.363 10:53:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.363 10:53:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.363 10:53:01 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.363 10:53:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.363 10:53:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.363 10:53:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.363 10:53:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.364 10:53:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:37.364 10:53:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.364 10:53:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.935 nvme0n1 00:27:37.935 10:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.935 10:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.935 10:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.935 10:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.935 10:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.935 10:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.935 10:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.935 10:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.935 10:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.935 10:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.935 10:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.935 10:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.935 10:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:37.935 10:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.935 10:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:37.935 10:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:37.935 10:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:37.935 10:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Yzg5N2MwNDgzMDQyMDY0YjQzNTVlZjJmN2IwYzFiNjA1NTliNWUzZTg2ZGVkMzM26lD5xA==: 00:27:37.935 10:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: 00:27:37.935 10:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:37.935 10:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:37.935 10:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Yzg5N2MwNDgzMDQyMDY0YjQzNTVlZjJmN2IwYzFiNjA1NTliNWUzZTg2ZGVkMzM26lD5xA==: 00:27:37.935 10:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: ]] 00:27:37.935 10:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY5Y2FjYTIwMmFhY2YzYjQ2Zjk5ZDk4YWEzN2VhNmalXPDr: 00:27:37.935 10:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:37.935 10:53:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.935 10:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:37.935 10:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:37.935 10:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:37.935 10:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.935 10:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:37.935 10:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.935 10:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.195 10:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.195 10:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.195 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:38.195 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.195 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.195 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.196 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.196 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:38.196 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.196 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.196 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.196 10:53:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.196 10:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:38.196 10:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.196 10:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.783 nvme0n1 00:27:38.783 10:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.783 10:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.783 10:53:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.783 10:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.783 10:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.783 10:53:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTUxM2RlYjdmNjZmZmI4YTkwNDgwODc0MmVjNzMxNzAxZDFjOWU0Y2YxY2ZmODc3MDI2NGZhYzc3NzliZDk0YipuvLY=: 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTUxM2RlYjdmNjZmZmI4YTkwNDgwODc0MmVjNzMxNzAxZDFjOWU0Y2YxY2ZmODc3MDI2NGZhYzc3NzliZDk0YipuvLY=: 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:27:38.783 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.733 nvme0n1 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQzYTFhYzUyNzVkMDllN2JkZWYwNTMxYWEyODMyYWIwNDU4MTc2YjZhYjA2Mzg4OLc2PA==: 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQzYTFhYzUyNzVkMDllN2JkZWYwNTMxYWEyODMyYWIwNDU4MTc2YjZhYjA2Mzg4OLc2PA==: 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: ]] 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzdiNjAxZDdiZWFiM2EyMzViZDRlN2VmMTY5NjUxZDBmOTI4NzRiMjg5N2RmMjU0Jj3I4Q==: 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.733 
10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.733 request: 00:27:39.733 { 00:27:39.733 "name": "nvme0", 00:27:39.733 "trtype": "tcp", 00:27:39.733 "traddr": "10.0.0.1", 00:27:39.733 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:39.733 "adrfam": "ipv4", 00:27:39.733 "trsvcid": "4420", 00:27:39.733 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:39.733 "method": "bdev_nvme_attach_controller", 00:27:39.733 "req_id": 1 00:27:39.733 } 00:27:39.733 Got JSON-RPC error response 00:27:39.733 response: 00:27:39.733 { 00:27:39.733 "code": -5, 00:27:39.733 "message": "Input/output error" 00:27:39.733 } 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:27:39.733 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.734 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:39.734 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.734 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.734 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.734 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:39.734 
10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:39.734 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.734 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.734 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.734 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.734 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.734 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.734 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.734 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.734 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.734 10:53:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.734 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:39.734 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:27:39.734 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:39.734 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:27:39.734 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:39.734 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:27:39.734 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:39.734 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:39.734 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.734 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.734 request: 00:27:39.734 { 00:27:39.734 "name": "nvme0", 00:27:39.734 "trtype": "tcp", 00:27:39.734 "traddr": "10.0.0.1", 00:27:39.734 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:39.734 "adrfam": "ipv4", 00:27:39.734 "trsvcid": "4420", 00:27:39.734 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:39.734 "dhchap_key": "key2", 00:27:39.734 "method": "bdev_nvme_attach_controller", 00:27:39.734 "req_id": 1 00:27:39.734 } 00:27:39.734 Got JSON-RPC error response 00:27:39.734 response: 00:27:39.734 { 00:27:39.734 "code": -5, 00:27:39.734 "message": "Input/output error" 00:27:39.734 } 00:27:39.734 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:27:39.734 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:27:39.734 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:27:39.734 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:27:39.734 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:27:39.734 
10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.734 10:53:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:39.734 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.734 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.734 10:53:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.994 10:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:39.994 10:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:39.994 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.994 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.994 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.994 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.995 request: 00:27:39.995 { 00:27:39.995 "name": "nvme0", 00:27:39.995 "trtype": "tcp", 00:27:39.995 "traddr": "10.0.0.1", 00:27:39.995 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:39.995 "adrfam": "ipv4", 00:27:39.995 "trsvcid": "4420", 00:27:39.995 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:39.995 "dhchap_key": "key1", 00:27:39.995 "dhchap_ctrlr_key": "ckey2", 00:27:39.995 "method": "bdev_nvme_attach_controller", 00:27:39.995 "req_id": 1 
00:27:39.995 } 00:27:39.995 Got JSON-RPC error response 00:27:39.995 response: 00:27:39.995 { 00:27:39.995 "code": -5, 00:27:39.995 "message": "Input/output error" 00:27:39.995 } 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:39.995 rmmod nvme_tcp 00:27:39.995 rmmod nvme_fabrics 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 988898 ']' 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 988898 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@949 -- # '[' -z 988898 ']' 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # kill -0 988898 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # uname 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 988898 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 988898' 00:27:39.995 killing process with pid 988898 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@968 -- # kill 988898 00:27:39.995 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@973 -- # wait 988898 00:27:40.257 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:40.257 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:40.257 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:40.257 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:40.257 10:53:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:40.257 10:53:04 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:40.257 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:40.257 10:53:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:42.171 10:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:42.171 10:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:42.171 10:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:42.171 10:53:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:42.171 10:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:42.171 10:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:27:42.171 10:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:42.171 10:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:42.171 10:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:42.171 10:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:42.171 10:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:42.171 10:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:42.433 10:53:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:45.776 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:45.776 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:45.776 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:45.776 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:45.776 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:45.776 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:45.776 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:45.776 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:45.776 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:45.776 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:45.776 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:45.776 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:45.776 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:45.776 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:45.776 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:45.776 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:46.036 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:46.036 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.LIk /tmp/spdk.key-null.XGn /tmp/spdk.key-sha256.aUt /tmp/spdk.key-sha384.gOg /tmp/spdk.key-sha512.0oQ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:46.036 10:53:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:49.336 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:49.336 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:49.336 0000:80:01.4 (8086 0b00): Already using the 
vfio-pci driver 00:27:49.336 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:49.336 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:49.336 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:49.336 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:49.336 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:49.336 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:49.336 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:27:49.336 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:49.336 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:27:49.336 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:49.336 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:49.336 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:49.336 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:49.336 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:49.597 00:27:49.597 real 0m56.677s 00:27:49.597 user 0m50.653s 00:27:49.597 sys 0m14.639s 00:27:49.597 10:53:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:49.597 10:53:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.597 ************************************ 00:27:49.597 END TEST nvmf_auth_host 00:27:49.597 ************************************ 00:27:49.597 10:53:13 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:27:49.597 10:53:13 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:49.597 10:53:13 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:27:49.597 10:53:13 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:49.597 10:53:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:49.597 ************************************ 00:27:49.597 START TEST nvmf_digest 00:27:49.597 ************************************ 00:27:49.597 10:53:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:49.597 * Looking for test storage... 
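For anyone replaying the nvmf_auth_host phase that closes above by hand, each per-keyid iteration reduces to a small set of initiator-side RPCs; the kernel-target half (the hmac(sha512) / ffdhe8192 / DHHC-1 writes traced by nvmet_auth_set_key) is driven through configfs by auth.sh and is not repeated here. The sketch below is a condensed restatement using only the calls and flags visible in the trace; the address, NQNs and key names are this run's fixtures, paths are shortened to the repository root, and the keys themselves are assumed to have been registered by auth.sh before this point. The NOT-wrapped attaches further up are the negative half of the same flow: without a usable key the attach is expected to fail with the JSON-RPC code -5 (Input/output error) shown in the request/response dumps.

    # limit the initiator to one digest/DH-group pair for this iteration
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    # authenticate to the kernel target with key3; the controller key (ckey3)
    # enables bidirectional authentication when the target has one configured
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3
    # confirm the controller came up, then detach before the next keyid
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0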
00:27:49.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:49.597 10:53:13 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:49.597 10:53:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:49.597 10:53:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:49.597 10:53:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:49.597 10:53:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:49.597 10:53:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:49.597 10:53:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:49.597 10:53:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:49.597 10:53:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:49.597 10:53:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:49.597 10:53:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:49.597 10:53:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:49.597 10:53:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:49.597 10:53:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:49.597 10:53:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:49.597 10:53:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:49.597 10:53:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:49.597 10:53:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:49.597 10:53:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:49.858 10:53:13 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:49.858 10:53:13 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:49.858 10:53:13 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:49.858 10:53:13 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.858 10:53:13 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.858 10:53:13 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.858 10:53:13 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:49.858 10:53:13 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:49.858 10:53:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:27:49.858 10:53:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:49.858 10:53:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:49.858 10:53:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:49.858 10:53:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:49.858 10:53:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:49.858 10:53:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:49.858 10:53:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:49.858 10:53:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:49.858 10:53:13 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:49.858 10:53:13 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:49.858 10:53:13 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:49.858 10:53:13 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:49.858 10:53:13 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:49.858 10:53:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:49.859 10:53:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:49.859 10:53:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:49.859 10:53:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:49.859 10:53:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:49.859 10:53:13 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.859 10:53:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:49.859 10:53:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:49.859 10:53:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:49.859 10:53:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:49.859 10:53:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:27:49.859 10:53:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:58.007 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:58.007 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:27:58.007 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:58.007 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:58.007 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:58.007 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:58.007 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:58.007 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:27:58.007 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:58.007 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:27:58.007 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:27:58.007 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:27:58.007 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:27:58.007 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:27:58.007 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:27:58.007 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:58.007 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:58.007 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:58.008 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:58.008 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:58.008 Found net devices under 0000:31:00.0: cvl_0_0 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:58.008 Found net devices under 0000:31:00.1: cvl_0_1 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:58.008 10:53:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:58.008 10:53:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:58.008 10:53:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:58.008 10:53:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:58.008 10:53:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:58.008 10:53:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:58.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:58.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.694 ms 00:27:58.008 00:27:58.008 --- 10.0.0.2 ping statistics --- 00:27:58.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.008 rtt min/avg/max/mdev = 0.694/0.694/0.694/0.000 ms 00:27:58.008 10:53:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:58.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:58.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.349 ms 00:27:58.008 00:27:58.008 --- 10.0.0.1 ping statistics --- 00:27:58.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.008 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:27:58.008 10:53:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:58.008 10:53:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:27:58.008 10:53:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:58.008 10:53:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:58.008 10:53:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:58.008 10:53:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:58.008 10:53:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:58.008 10:53:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:58.008 10:53:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:58.008 10:53:21 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:58.008 10:53:21 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:58.008 10:53:21 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:58.008 10:53:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:27:58.008 10:53:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:58.008 10:53:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:58.008 ************************************ 00:27:58.008 START TEST nvmf_digest_clean 00:27:58.008 ************************************ 00:27:58.009 10:53:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # run_digest 00:27:58.009 10:53:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:27:58.009 10:53:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:58.009 10:53:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:58.009 10:53:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:58.009 10:53:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:58.009 10:53:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:58.009 10:53:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@723 -- # xtrace_disable 00:27:58.009 10:53:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:58.009 10:53:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1005299 00:27:58.009 10:53:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1005299 00:27:58.009 10:53:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:58.009 10:53:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 1005299 ']' 00:27:58.009 10:53:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:58.009 
10:53:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:58.009 10:53:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:58.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:58.009 10:53:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:58.009 10:53:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:58.009 [2024-06-10 10:53:21.300559] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:27:58.009 [2024-06-10 10:53:21.300610] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:58.009 EAL: No free 2048 kB hugepages reported on node 1 00:27:58.009 [2024-06-10 10:53:21.368714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.009 [2024-06-10 10:53:21.437004] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:58.009 [2024-06-10 10:53:21.437040] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:58.009 [2024-06-10 10:53:21.437048] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:58.009 [2024-06-10 10:53:21.437054] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:58.009 [2024-06-10 10:53:21.437060] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
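The digest suite above runs over a back-to-back pair of E810 ports: cvl_0_0 is moved into a private network namespace and becomes the target side (10.0.0.2), cvl_0_1 stays in the default namespace as the initiator (10.0.0.1), and nvmf_tgt is launched inside that namespace. The following is a condensed restatement of the nvmf/common.sh commands traced above; interface names and addresses are the ones this run detected, and the nvmf_tgt path is shortened to the repository root.

    # target port isolated in its own namespace, initiator port left in the default one
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # the NVMe-oF target then runs entirely inside the namespace
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &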
00:27:58.009 [2024-06-10 10:53:21.437083] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.009 10:53:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:58.009 10:53:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:27:58.009 10:53:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:58.009 10:53:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@729 -- # xtrace_disable 00:27:58.009 10:53:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:58.009 10:53:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:58.009 10:53:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:58.009 10:53:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:58.009 10:53:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:58.009 10:53:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.009 10:53:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:58.009 null0 00:27:58.009 [2024-06-10 10:53:22.163935] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:58.009 [2024-06-10 10:53:22.187922] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:58.009 [2024-06-10 10:53:22.188124] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:58.009 10:53:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.009 10:53:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:27:58.009 10:53:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:58.009 10:53:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:58.009 10:53:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:58.009 10:53:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:58.009 10:53:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:58.009 10:53:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:58.009 10:53:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1005422 00:27:58.009 10:53:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1005422 /var/tmp/bperf.sock 00:27:58.009 10:53:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 1005422 ']' 00:27:58.009 10:53:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:58.009 10:53:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:58.009 10:53:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local 
max_retries=100 00:27:58.009 10:53:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:58.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:58.009 10:53:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:58.009 10:53:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:58.009 [2024-06-10 10:53:22.252589] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:27:58.009 [2024-06-10 10:53:22.252654] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1005422 ] 00:27:58.009 EAL: No free 2048 kB hugepages reported on node 1 00:27:58.271 [2024-06-10 10:53:22.328940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.271 [2024-06-10 10:53:22.393054] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:27:58.841 10:53:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:58.841 10:53:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:27:58.841 10:53:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:58.841 10:53:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:58.841 10:53:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:59.103 10:53:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:59.103 10:53:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:59.364 nvme0n1 00:27:59.364 10:53:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:59.364 10:53:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:59.364 Running I/O for 2 seconds... 
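The clean-digest pass whose 2-second run starts above pairs a dedicated bdevperf instance on its own RPC socket with a --ddgst attach, so the data PDUs of every read carry a crc32c data digest; the accel crc32c counter queried afterwards is how the test verifies the digest was actually computed (by the software module here, since DSA is disabled in this configuration). A condensed restatement of the traced commands, with paths shortened to the repository root:

    # bdevperf on a private socket, started idle so options can be set first
    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    # --ddgst turns on the NVMe/TCP data digest for this controller
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # run the 2-second randread workload, then read back the crc32c accel stats
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats | \
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'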
00:28:01.274 00:28:01.274 Latency(us) 00:28:01.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:01.274 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:01.274 nvme0n1 : 2.00 20593.48 80.44 0.00 0.00 6208.08 3072.00 14745.60 00:28:01.274 =================================================================================================================== 00:28:01.274 Total : 20593.48 80.44 0.00 0.00 6208.08 3072.00 14745.60 00:28:01.274 0 00:28:01.535 10:53:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:01.535 10:53:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:01.535 10:53:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:01.535 10:53:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:01.535 | select(.opcode=="crc32c") 00:28:01.535 | "\(.module_name) \(.executed)"' 00:28:01.535 10:53:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:01.535 10:53:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:01.535 10:53:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:01.535 10:53:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:01.535 10:53:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:01.535 10:53:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1005422 00:28:01.535 10:53:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 1005422 ']' 00:28:01.535 10:53:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 1005422 00:28:01.535 10:53:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:28:01.535 10:53:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:01.535 10:53:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1005422 00:28:01.535 10:53:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:01.535 10:53:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:01.535 10:53:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1005422' 00:28:01.535 killing process with pid 1005422 00:28:01.535 10:53:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 1005422 00:28:01.535 Received shutdown signal, test time was about 2.000000 seconds 00:28:01.535 00:28:01.535 Latency(us) 00:28:01.535 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:01.535 =================================================================================================================== 00:28:01.535 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:01.535 10:53:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 1005422 00:28:01.796 10:53:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:01.796 10:53:25 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:01.796 10:53:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:01.796 10:53:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:01.796 10:53:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:01.796 10:53:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:01.796 10:53:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:01.796 10:53:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1006190 00:28:01.796 10:53:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1006190 /var/tmp/bperf.sock 00:28:01.796 10:53:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 1006190 ']' 00:28:01.796 10:53:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:01.796 10:53:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:01.796 10:53:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:01.796 10:53:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:01.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:01.796 10:53:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:01.796 10:53:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:01.796 [2024-06-10 10:53:25.961451] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:28:01.796 [2024-06-10 10:53:25.961504] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1006190 ] 00:28:01.796 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:01.796 Zero copy mechanism will not be used. 
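After each run the trace checks that the data-digest crc32c work was actually executed, and by the expected accel module (software here, since scan_dsa is false). Condensed into a hypothetical standalone form, using the same accel_get_stats call and jq filter that appear in the log:

  # pull per-opcode accel statistics from the bdevperf instance and keep the crc32c row
  read -r acc_module acc_executed < <( \
      ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')

  (( acc_executed > 0 ))              # some digests must have been computed
  [[ $acc_module == software ]]       # and by the software module, not DSA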
00:28:01.796 EAL: No free 2048 kB hugepages reported on node 1 00:28:01.796 [2024-06-10 10:53:26.036881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.057 [2024-06-10 10:53:26.090232] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:28:02.628 10:53:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:02.628 10:53:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:28:02.628 10:53:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:02.628 10:53:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:02.628 10:53:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:02.889 10:53:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:02.889 10:53:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:03.149 nvme0n1 00:28:03.149 10:53:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:03.149 10:53:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:03.149 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:03.149 Zero copy mechanism will not be used. 00:28:03.149 Running I/O for 2 seconds... 
00:28:05.063 00:28:05.063 Latency(us) 00:28:05.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:05.063 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:05.063 nvme0n1 : 2.00 2824.43 353.05 0.00 0.00 5662.35 1788.59 8847.36 00:28:05.063 =================================================================================================================== 00:28:05.063 Total : 2824.43 353.05 0.00 0.00 5662.35 1788.59 8847.36 00:28:05.063 0 00:28:05.063 10:53:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:05.063 10:53:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:05.063 10:53:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:05.063 10:53:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:05.063 | select(.opcode=="crc32c") 00:28:05.063 | "\(.module_name) \(.executed)"' 00:28:05.063 10:53:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:05.324 10:53:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:05.324 10:53:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:05.324 10:53:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:05.324 10:53:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:05.324 10:53:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1006190 00:28:05.324 10:53:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 1006190 ']' 00:28:05.324 10:53:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 1006190 00:28:05.324 10:53:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:28:05.324 10:53:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:05.324 10:53:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1006190 00:28:05.324 10:53:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:05.324 10:53:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:05.324 10:53:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1006190' 00:28:05.324 killing process with pid 1006190 00:28:05.324 10:53:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 1006190 00:28:05.324 Received shutdown signal, test time was about 2.000000 seconds 00:28:05.324 00:28:05.324 Latency(us) 00:28:05.324 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:05.324 =================================================================================================================== 00:28:05.324 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:05.324 10:53:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 1006190 00:28:05.585 10:53:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:05.585 10:53:29 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:05.585 10:53:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:05.585 10:53:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:05.585 10:53:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:05.585 10:53:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:05.585 10:53:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:05.585 10:53:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1006957 00:28:05.585 10:53:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1006957 /var/tmp/bperf.sock 00:28:05.585 10:53:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 1006957 ']' 00:28:05.585 10:53:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:05.585 10:53:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:05.585 10:53:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:05.585 10:53:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:05.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:05.585 10:53:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:05.585 10:53:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:05.585 [2024-06-10 10:53:29.694207] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
00:28:05.585 [2024-06-10 10:53:29.694316] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1006957 ] 00:28:05.585 EAL: No free 2048 kB hugepages reported on node 1 00:28:05.585 [2024-06-10 10:53:29.770495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.585 [2024-06-10 10:53:29.823592] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.527 10:53:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:06.527 10:53:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:28:06.527 10:53:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:06.527 10:53:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:06.527 10:53:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:06.527 10:53:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:06.527 10:53:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:06.788 nvme0n1 00:28:06.788 10:53:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:06.788 10:53:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:06.788 Running I/O for 2 seconds... 
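The remaining iterations repeat exactly this flow; only the workload parameters handed to run_bperf change. The four combinations exercised by nvmf_digest_clean in this trace (all with scan_dsa=false) are:

  run_bperf randread   4096 128 false   # small reads, deep queue
  run_bperf randread 131072  16 false   # large reads; above the 65536-byte zero-copy threshold
  run_bperf randwrite  4096 128 false   # small writes, deep queue
  run_bperf randwrite 131072  16 false  # large writes; zero copy again skipped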
00:28:08.700 00:28:08.700 Latency(us) 00:28:08.700 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:08.700 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:08.700 nvme0n1 : 2.00 21941.01 85.71 0.00 0.00 5826.39 2252.80 11905.71 00:28:08.700 =================================================================================================================== 00:28:08.700 Total : 21941.01 85.71 0.00 0.00 5826.39 2252.80 11905.71 00:28:08.700 0 00:28:08.960 10:53:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:08.960 10:53:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:08.960 10:53:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:08.960 10:53:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:08.960 | select(.opcode=="crc32c") 00:28:08.960 | "\(.module_name) \(.executed)"' 00:28:08.960 10:53:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:08.960 10:53:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:08.960 10:53:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:08.960 10:53:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:08.960 10:53:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:08.960 10:53:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1006957 00:28:08.960 10:53:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 1006957 ']' 00:28:08.960 10:53:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 1006957 00:28:08.960 10:53:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:28:08.960 10:53:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:08.960 10:53:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1006957 00:28:08.960 10:53:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:08.960 10:53:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:08.960 10:53:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1006957' 00:28:08.960 killing process with pid 1006957 00:28:08.960 10:53:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 1006957 00:28:08.960 Received shutdown signal, test time was about 2.000000 seconds 00:28:08.960 00:28:08.960 Latency(us) 00:28:08.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:08.960 =================================================================================================================== 00:28:08.960 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:08.960 10:53:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 1006957 00:28:09.221 10:53:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:09.221 10:53:33 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:09.221 10:53:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:09.221 10:53:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:09.221 10:53:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:09.221 10:53:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:09.221 10:53:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:09.221 10:53:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1007694 00:28:09.221 10:53:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1007694 /var/tmp/bperf.sock 00:28:09.221 10:53:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 1007694 ']' 00:28:09.221 10:53:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:09.221 10:53:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:09.221 10:53:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:09.221 10:53:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:09.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:09.221 10:53:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:09.221 10:53:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:09.221 [2024-06-10 10:53:33.380260] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:28:09.221 [2024-06-10 10:53:33.380337] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1007694 ] 00:28:09.221 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:09.221 Zero copy mechanism will not be used. 
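Each bperf instance is then torn down through the suite's killprocess helper; the individual checks are visible in the xtrace above (pids 1005422, 1006190, ...). Roughly reconstructed as straight-line shell, with the sudo branch omitted because this trace never takes it:

  pid=1006190                                       # illustration only
  [ -n "$pid" ] && kill -0 "$pid"                   # argument present and process still alive
  process_name=$(ps --no-headers -o comm= "$pid")   # -> reactor_1 for a bdevperf child
  [ "$process_name" = sudo ] || echo "killing process with pid $pid"
  kill "$pid" && wait "$pid"                        # signal, then reap; bdevperf prints its shutdown stats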
00:28:09.221 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.221 [2024-06-10 10:53:33.456695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.481 [2024-06-10 10:53:33.509994] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:28:10.052 10:53:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:10.052 10:53:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:28:10.052 10:53:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:10.052 10:53:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:10.052 10:53:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:10.052 10:53:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:10.052 10:53:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:10.623 nvme0n1 00:28:10.623 10:53:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:10.623 10:53:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:10.623 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:10.623 Zero copy mechanism will not be used. 00:28:10.623 Running I/O for 2 seconds... 
00:28:12.537 00:28:12.537 Latency(us) 00:28:12.537 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:12.537 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:12.537 nvme0n1 : 2.00 4543.32 567.92 0.00 0.00 3515.58 1856.85 16493.23 00:28:12.537 =================================================================================================================== 00:28:12.537 Total : 4543.32 567.92 0.00 0.00 3515.58 1856.85 16493.23 00:28:12.537 0 00:28:12.537 10:53:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:12.537 10:53:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:12.537 10:53:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:12.537 10:53:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:12.537 | select(.opcode=="crc32c") 00:28:12.537 | "\(.module_name) \(.executed)"' 00:28:12.537 10:53:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:12.797 10:53:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:12.797 10:53:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:12.797 10:53:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:12.797 10:53:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:12.797 10:53:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1007694 00:28:12.797 10:53:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 1007694 ']' 00:28:12.797 10:53:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 1007694 00:28:12.797 10:53:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:28:12.797 10:53:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:12.797 10:53:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1007694 00:28:12.797 10:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:12.797 10:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:12.797 10:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1007694' 00:28:12.797 killing process with pid 1007694 00:28:12.797 10:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 1007694 00:28:12.797 Received shutdown signal, test time was about 2.000000 seconds 00:28:12.797 00:28:12.797 Latency(us) 00:28:12.797 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:12.797 =================================================================================================================== 00:28:12.797 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:12.797 10:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 1007694 00:28:13.059 10:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1005299 00:28:13.059 10:53:37 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 1005299 ']' 00:28:13.059 10:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 1005299 00:28:13.059 10:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:28:13.059 10:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:13.059 10:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1005299 00:28:13.059 10:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:13.059 10:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:13.059 10:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1005299' 00:28:13.059 killing process with pid 1005299 00:28:13.059 10:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 1005299 00:28:13.059 [2024-06-10 10:53:37.187659] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:13.059 10:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 1005299 00:28:13.059 00:28:13.059 real 0m16.091s 00:28:13.059 user 0m31.508s 00:28:13.059 sys 0m3.261s 00:28:13.059 10:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:13.059 10:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:13.059 ************************************ 00:28:13.059 END TEST nvmf_digest_clean 00:28:13.059 ************************************ 00:28:13.320 10:53:37 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:13.320 10:53:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:28:13.320 10:53:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:13.320 10:53:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:13.320 ************************************ 00:28:13.320 START TEST nvmf_digest_error 00:28:13.320 ************************************ 00:28:13.320 10:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # run_digest_error 00:28:13.320 10:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:13.320 10:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:13.320 10:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:13.320 10:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:13.320 10:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1008404 00:28:13.320 10:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1008404 00:28:13.320 10:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:13.320 10:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 
1008404 ']' 00:28:13.320 10:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:13.320 10:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:13.320 10:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:13.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:13.320 10:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:13.320 10:53:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:13.320 [2024-06-10 10:53:37.452722] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:28:13.320 [2024-06-10 10:53:37.452767] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:13.320 EAL: No free 2048 kB hugepages reported on node 1 00:28:13.320 [2024-06-10 10:53:37.517614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.320 [2024-06-10 10:53:37.580159] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:13.320 [2024-06-10 10:53:37.580198] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:13.320 [2024-06-10 10:53:37.580205] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:13.320 [2024-06-10 10:53:37.580215] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:13.320 [2024-06-10 10:53:37.580221] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
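Where nvmf_digest_clean only verifies that digests get computed, the nvmf_digest_error test starting above reroutes the crc32c opcode to the accel error-injection module so digests can be corrupted on demand. The RPC sequence visible in the trace that follows boils down to roughly this (target-side calls go through rpc_cmd to the nvmf_tgt's default socket; netns plumbing and wait loops omitted):

  # target: send all crc32c work through the error-injection module
  ./scripts/rpc.py accel_assign_opc -o crc32c -m error
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable   # keep the attach itself clean

  # initiator (bdevperf): never give up on failed I/O, attach with --ddgst as before
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # now corrupt crc32c results (-i 256 as in the trace); every affected read completes with
  # "data digest error" and COMMAND TRANSIENT TRANSPORT ERROR, as seen below
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256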
00:28:13.320 [2024-06-10 10:53:37.580240] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.263 10:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:14.263 10:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:28:14.263 10:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:14.263 10:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:14.263 10:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:14.263 10:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:14.263 10:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:14.263 10:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:14.263 10:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:14.263 [2024-06-10 10:53:38.274226] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:14.263 10:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:14.263 10:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:14.263 10:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:14.263 10:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:14.263 10:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:14.263 null0 00:28:14.263 [2024-06-10 10:53:38.355150] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:14.263 [2024-06-10 10:53:38.379143] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:14.263 [2024-06-10 10:53:38.379359] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:14.263 10:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:14.263 10:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:14.263 10:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:14.263 10:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:14.263 10:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:14.263 10:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:14.263 10:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1008660 00:28:14.263 10:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1008660 /var/tmp/bperf.sock 00:28:14.263 10:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 1008660 ']' 00:28:14.263 10:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:14.263 
10:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:14.263 10:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:14.263 10:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:14.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:14.263 10:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:14.263 10:53:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:14.263 [2024-06-10 10:53:38.434356] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:28:14.263 [2024-06-10 10:53:38.434403] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1008660 ] 00:28:14.263 EAL: No free 2048 kB hugepages reported on node 1 00:28:14.263 [2024-06-10 10:53:38.510470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.524 [2024-06-10 10:53:38.565178] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:28:15.094 10:53:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:15.094 10:53:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:28:15.094 10:53:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:15.094 10:53:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:15.094 10:53:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:15.094 10:53:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:15.094 10:53:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:15.094 10:53:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:15.094 10:53:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:15.094 10:53:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:15.665 nvme0n1 00:28:15.665 10:53:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:15.665 10:53:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:15.665 10:53:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:15.665 10:53:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:15.665 10:53:39 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:15.665 10:53:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:15.665 Running I/O for 2 seconds... 00:28:15.665 [2024-06-10 10:53:39.806848] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:15.665 [2024-06-10 10:53:39.806877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.665 [2024-06-10 10:53:39.806886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.665 [2024-06-10 10:53:39.818799] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:15.665 [2024-06-10 10:53:39.818818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.665 [2024-06-10 10:53:39.818825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.665 [2024-06-10 10:53:39.830944] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:15.665 [2024-06-10 10:53:39.830962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.665 [2024-06-10 10:53:39.830973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.665 [2024-06-10 10:53:39.843317] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:15.665 [2024-06-10 10:53:39.843335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.665 [2024-06-10 10:53:39.843342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.665 [2024-06-10 10:53:39.856855] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:15.665 [2024-06-10 10:53:39.856874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.665 [2024-06-10 10:53:39.856881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.665 [2024-06-10 10:53:39.869016] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:15.665 [2024-06-10 10:53:39.869034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.666 [2024-06-10 10:53:39.869041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.666 [2024-06-10 10:53:39.880307] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x180fe60) 00:28:15.666 [2024-06-10 10:53:39.880324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.666 [2024-06-10 10:53:39.880330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.666 [2024-06-10 10:53:39.892947] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:15.666 [2024-06-10 10:53:39.892965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.666 [2024-06-10 10:53:39.892971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.666 [2024-06-10 10:53:39.906110] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:15.666 [2024-06-10 10:53:39.906127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.666 [2024-06-10 10:53:39.906133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.666 [2024-06-10 10:53:39.917414] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:15.666 [2024-06-10 10:53:39.917431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.666 [2024-06-10 10:53:39.917438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.666 [2024-06-10 10:53:39.930626] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:15.666 [2024-06-10 10:53:39.930643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.666 [2024-06-10 10:53:39.930649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.666 [2024-06-10 10:53:39.943215] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:15.666 [2024-06-10 10:53:39.943237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:17175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.666 [2024-06-10 10:53:39.943256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.927 [2024-06-10 10:53:39.954654] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:15.927 [2024-06-10 10:53:39.954671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.927 [2024-06-10 10:53:39.954677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.927 [2024-06-10 10:53:39.968030] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:15.927 [2024-06-10 10:53:39.968047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.927 [2024-06-10 10:53:39.968054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.927 [2024-06-10 10:53:39.980442] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:15.927 [2024-06-10 10:53:39.980459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.927 [2024-06-10 10:53:39.980465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.927 [2024-06-10 10:53:39.991293] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:15.927 [2024-06-10 10:53:39.991310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.927 [2024-06-10 10:53:39.991316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.927 [2024-06-10 10:53:40.005620] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:15.927 [2024-06-10 10:53:40.005638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.927 [2024-06-10 10:53:40.005645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.927 [2024-06-10 10:53:40.016661] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:15.927 [2024-06-10 10:53:40.016678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.927 [2024-06-10 10:53:40.016685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.927 [2024-06-10 10:53:40.030236] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:15.927 [2024-06-10 10:53:40.030257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.927 [2024-06-10 10:53:40.030264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.927 [2024-06-10 10:53:40.041529] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:15.927 [2024-06-10 10:53:40.041546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.927 [2024-06-10 10:53:40.041552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:15.927 [2024-06-10 10:53:40.053307] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:15.927 [2024-06-10 10:53:40.053324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.927 [2024-06-10 10:53:40.053330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.927 [2024-06-10 10:53:40.066311] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:15.927 [2024-06-10 10:53:40.066329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.927 [2024-06-10 10:53:40.066335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.927 [2024-06-10 10:53:40.080064] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:15.927 [2024-06-10 10:53:40.080080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.927 [2024-06-10 10:53:40.080087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.927 [2024-06-10 10:53:40.091649] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:15.927 [2024-06-10 10:53:40.091666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.927 [2024-06-10 10:53:40.091672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.927 [2024-06-10 10:53:40.103477] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:15.927 [2024-06-10 10:53:40.103494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.927 [2024-06-10 10:53:40.103501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.927 [2024-06-10 10:53:40.114503] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:15.927 [2024-06-10 10:53:40.114520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:25555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.927 [2024-06-10 10:53:40.114527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.927 [2024-06-10 10:53:40.129549] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:15.927 [2024-06-10 10:53:40.129565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.927 [2024-06-10 10:53:40.129572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.927 [2024-06-10 10:53:40.140031] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:15.927 [2024-06-10 10:53:40.140048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.927 [2024-06-10 10:53:40.140055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.927 [2024-06-10 10:53:40.152638] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:15.927 [2024-06-10 10:53:40.152660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.927 [2024-06-10 10:53:40.152666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.927 [2024-06-10 10:53:40.166194] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:15.927 [2024-06-10 10:53:40.166210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.927 [2024-06-10 10:53:40.166217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.927 [2024-06-10 10:53:40.176967] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:15.928 [2024-06-10 10:53:40.176984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.928 [2024-06-10 10:53:40.176991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.928 [2024-06-10 10:53:40.188889] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:15.928 [2024-06-10 10:53:40.188907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.928 [2024-06-10 10:53:40.188913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.928 [2024-06-10 10:53:40.201375] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:15.928 [2024-06-10 10:53:40.201392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.928 [2024-06-10 10:53:40.201398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.928 [2024-06-10 10:53:40.213294] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:15.928 [2024-06-10 10:53:40.213311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.928 [2024-06-10 10:53:40.213317] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.188 [2024-06-10 10:53:40.225748] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.188 [2024-06-10 10:53:40.225765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.189 [2024-06-10 10:53:40.225771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.189 [2024-06-10 10:53:40.239255] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.189 [2024-06-10 10:53:40.239271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.189 [2024-06-10 10:53:40.239277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.189 [2024-06-10 10:53:40.250376] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.189 [2024-06-10 10:53:40.250394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:11559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.189 [2024-06-10 10:53:40.250400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.189 [2024-06-10 10:53:40.262983] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.189 [2024-06-10 10:53:40.262999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.189 [2024-06-10 10:53:40.263005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.189 [2024-06-10 10:53:40.275902] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.189 [2024-06-10 10:53:40.275918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.189 [2024-06-10 10:53:40.275925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.189 [2024-06-10 10:53:40.288138] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.189 [2024-06-10 10:53:40.288154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:1033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.189 [2024-06-10 10:53:40.288160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.189 [2024-06-10 10:53:40.300603] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.189 [2024-06-10 10:53:40.300620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:16.189 [2024-06-10 10:53:40.300627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.189 [2024-06-10 10:53:40.310339] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.189 [2024-06-10 10:53:40.310356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.189 [2024-06-10 10:53:40.310362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.189 [2024-06-10 10:53:40.324146] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.189 [2024-06-10 10:53:40.324163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.189 [2024-06-10 10:53:40.324169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.189 [2024-06-10 10:53:40.335339] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.189 [2024-06-10 10:53:40.335356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:36 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.189 [2024-06-10 10:53:40.335362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.189 [2024-06-10 10:53:40.347508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.189 [2024-06-10 10:53:40.347525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.189 [2024-06-10 10:53:40.347531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.189 [2024-06-10 10:53:40.360627] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.189 [2024-06-10 10:53:40.360644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.189 [2024-06-10 10:53:40.360653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.189 [2024-06-10 10:53:40.372459] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.189 [2024-06-10 10:53:40.372475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.189 [2024-06-10 10:53:40.372480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.189 [2024-06-10 10:53:40.383990] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.189 [2024-06-10 10:53:40.384006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 
lba:12195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.189 [2024-06-10 10:53:40.384013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.189 [2024-06-10 10:53:40.396359] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.189 [2024-06-10 10:53:40.396376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.189 [2024-06-10 10:53:40.396383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.189 [2024-06-10 10:53:40.408773] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.189 [2024-06-10 10:53:40.408790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.189 [2024-06-10 10:53:40.408796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.189 [2024-06-10 10:53:40.420900] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.189 [2024-06-10 10:53:40.420917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.189 [2024-06-10 10:53:40.420923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.189 [2024-06-10 10:53:40.433040] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.189 [2024-06-10 10:53:40.433056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.189 [2024-06-10 10:53:40.433063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.189 [2024-06-10 10:53:40.445953] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.189 [2024-06-10 10:53:40.445969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.189 [2024-06-10 10:53:40.445975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.189 [2024-06-10 10:53:40.458666] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.189 [2024-06-10 10:53:40.458682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.189 [2024-06-10 10:53:40.458689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.189 [2024-06-10 10:53:40.471697] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.189 [2024-06-10 10:53:40.471720] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.189 [2024-06-10 10:53:40.471726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.450 [2024-06-10 10:53:40.484154] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.450 [2024-06-10 10:53:40.484170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.450 [2024-06-10 10:53:40.484177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.450 [2024-06-10 10:53:40.494905] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.450 [2024-06-10 10:53:40.494922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.450 [2024-06-10 10:53:40.494929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.450 [2024-06-10 10:53:40.508419] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.450 [2024-06-10 10:53:40.508435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.450 [2024-06-10 10:53:40.508442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.450 [2024-06-10 10:53:40.520899] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.450 [2024-06-10 10:53:40.520915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.450 [2024-06-10 10:53:40.520922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.450 [2024-06-10 10:53:40.532811] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.450 [2024-06-10 10:53:40.532828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.450 [2024-06-10 10:53:40.532834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.450 [2024-06-10 10:53:40.545516] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.450 [2024-06-10 10:53:40.545533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.450 [2024-06-10 10:53:40.545540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.450 [2024-06-10 10:53:40.558073] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 
00:28:16.450 [2024-06-10 10:53:40.558090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.450 [2024-06-10 10:53:40.558096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.450 [2024-06-10 10:53:40.569204] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.450 [2024-06-10 10:53:40.569220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:18635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.451 [2024-06-10 10:53:40.569227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.451 [2024-06-10 10:53:40.581411] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.451 [2024-06-10 10:53:40.581428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.451 [2024-06-10 10:53:40.581434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.451 [2024-06-10 10:53:40.594909] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.451 [2024-06-10 10:53:40.594926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.451 [2024-06-10 10:53:40.594932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.451 [2024-06-10 10:53:40.606944] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.451 [2024-06-10 10:53:40.606961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.451 [2024-06-10 10:53:40.606967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.451 [2024-06-10 10:53:40.617711] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.451 [2024-06-10 10:53:40.617727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.451 [2024-06-10 10:53:40.617734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.451 [2024-06-10 10:53:40.631197] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.451 [2024-06-10 10:53:40.631213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.451 [2024-06-10 10:53:40.631219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.451 [2024-06-10 10:53:40.643692] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.451 [2024-06-10 10:53:40.643708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.451 [2024-06-10 10:53:40.643714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.451 [2024-06-10 10:53:40.655715] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.451 [2024-06-10 10:53:40.655731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.451 [2024-06-10 10:53:40.655737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.451 [2024-06-10 10:53:40.667944] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.451 [2024-06-10 10:53:40.667960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.451 [2024-06-10 10:53:40.667966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.451 [2024-06-10 10:53:40.680848] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.451 [2024-06-10 10:53:40.680864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.451 [2024-06-10 10:53:40.680873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.451 [2024-06-10 10:53:40.692128] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.451 [2024-06-10 10:53:40.692144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.451 [2024-06-10 10:53:40.692151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.451 [2024-06-10 10:53:40.705341] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.451 [2024-06-10 10:53:40.705358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.451 [2024-06-10 10:53:40.705364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.451 [2024-06-10 10:53:40.716859] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.451 [2024-06-10 10:53:40.716875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.451 [2024-06-10 10:53:40.716881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.451 [2024-06-10 10:53:40.729379] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.451 [2024-06-10 10:53:40.729395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.451 [2024-06-10 10:53:40.729401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.712 [2024-06-10 10:53:40.742445] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.712 [2024-06-10 10:53:40.742461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.712 [2024-06-10 10:53:40.742467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.712 [2024-06-10 10:53:40.754628] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.712 [2024-06-10 10:53:40.754644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.712 [2024-06-10 10:53:40.754650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.712 [2024-06-10 10:53:40.765095] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.712 [2024-06-10 10:53:40.765111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:16365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.712 [2024-06-10 10:53:40.765117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.712 [2024-06-10 10:53:40.777995] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.712 [2024-06-10 10:53:40.778011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.712 [2024-06-10 10:53:40.778017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.712 [2024-06-10 10:53:40.789783] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.712 [2024-06-10 10:53:40.789800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.712 [2024-06-10 10:53:40.789806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.712 [2024-06-10 10:53:40.802608] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.712 [2024-06-10 10:53:40.802625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:19808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.712 [2024-06-10 10:53:40.802631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:16.712 [2024-06-10 10:53:40.815157] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.712 [2024-06-10 10:53:40.815174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.712 [2024-06-10 10:53:40.815180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.712 [2024-06-10 10:53:40.826799] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.712 [2024-06-10 10:53:40.826815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.712 [2024-06-10 10:53:40.826822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.713 [2024-06-10 10:53:40.839564] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.713 [2024-06-10 10:53:40.839581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.713 [2024-06-10 10:53:40.839587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.713 [2024-06-10 10:53:40.850615] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.713 [2024-06-10 10:53:40.850632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.713 [2024-06-10 10:53:40.850638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.713 [2024-06-10 10:53:40.863864] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.713 [2024-06-10 10:53:40.863880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.713 [2024-06-10 10:53:40.863886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.713 [2024-06-10 10:53:40.876667] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.713 [2024-06-10 10:53:40.876683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.713 [2024-06-10 10:53:40.876689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.713 [2024-06-10 10:53:40.888254] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.713 [2024-06-10 10:53:40.888271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.713 [2024-06-10 10:53:40.888280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.713 [2024-06-10 10:53:40.900796] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.713 [2024-06-10 10:53:40.900812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.713 [2024-06-10 10:53:40.900818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.713 [2024-06-10 10:53:40.912461] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.713 [2024-06-10 10:53:40.912477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.713 [2024-06-10 10:53:40.912483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.713 [2024-06-10 10:53:40.926498] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.713 [2024-06-10 10:53:40.926515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.713 [2024-06-10 10:53:40.926521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.713 [2024-06-10 10:53:40.937644] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.713 [2024-06-10 10:53:40.937660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.713 [2024-06-10 10:53:40.937666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.713 [2024-06-10 10:53:40.950280] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.713 [2024-06-10 10:53:40.950295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.713 [2024-06-10 10:53:40.950302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.713 [2024-06-10 10:53:40.962396] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.713 [2024-06-10 10:53:40.962413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.713 [2024-06-10 10:53:40.962419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.713 [2024-06-10 10:53:40.972799] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.713 [2024-06-10 10:53:40.972815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.713 [2024-06-10 10:53:40.972821] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.713 [2024-06-10 10:53:40.985807] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.713 [2024-06-10 10:53:40.985823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.713 [2024-06-10 10:53:40.985829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.713 [2024-06-10 10:53:40.997708] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.713 [2024-06-10 10:53:40.997727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:23084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.713 [2024-06-10 10:53:40.997734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.974 [2024-06-10 10:53:41.010779] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.974 [2024-06-10 10:53:41.010795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:8011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.974 [2024-06-10 10:53:41.010802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.974 [2024-06-10 10:53:41.023663] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.974 [2024-06-10 10:53:41.023680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.974 [2024-06-10 10:53:41.023686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.974 [2024-06-10 10:53:41.035524] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.974 [2024-06-10 10:53:41.035540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.974 [2024-06-10 10:53:41.035546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.974 [2024-06-10 10:53:41.046319] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.974 [2024-06-10 10:53:41.046335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.974 [2024-06-10 10:53:41.046341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.974 [2024-06-10 10:53:41.058876] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.974 [2024-06-10 10:53:41.058893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:16.974 [2024-06-10 10:53:41.058899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.974 [2024-06-10 10:53:41.071533] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.974 [2024-06-10 10:53:41.071549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.974 [2024-06-10 10:53:41.071555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.974 [2024-06-10 10:53:41.084435] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.974 [2024-06-10 10:53:41.084452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.974 [2024-06-10 10:53:41.084458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.974 [2024-06-10 10:53:41.097423] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.974 [2024-06-10 10:53:41.097439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.974 [2024-06-10 10:53:41.097445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.974 [2024-06-10 10:53:41.109269] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.974 [2024-06-10 10:53:41.109285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.974 [2024-06-10 10:53:41.109291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.974 [2024-06-10 10:53:41.119882] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.974 [2024-06-10 10:53:41.119898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.974 [2024-06-10 10:53:41.119904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.974 [2024-06-10 10:53:41.131928] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.974 [2024-06-10 10:53:41.131945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.974 [2024-06-10 10:53:41.131951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.974 [2024-06-10 10:53:41.144913] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.974 [2024-06-10 10:53:41.144929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 
lba:13028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.974 [2024-06-10 10:53:41.144935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.974 [2024-06-10 10:53:41.157110] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.974 [2024-06-10 10:53:41.157126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.974 [2024-06-10 10:53:41.157133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.974 [2024-06-10 10:53:41.170176] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.974 [2024-06-10 10:53:41.170192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.974 [2024-06-10 10:53:41.170198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.974 [2024-06-10 10:53:41.181656] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.974 [2024-06-10 10:53:41.181672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:2624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.974 [2024-06-10 10:53:41.181678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.975 [2024-06-10 10:53:41.193177] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.975 [2024-06-10 10:53:41.193194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.975 [2024-06-10 10:53:41.193200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.975 [2024-06-10 10:53:41.207115] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.975 [2024-06-10 10:53:41.207132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.975 [2024-06-10 10:53:41.207141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.975 [2024-06-10 10:53:41.219469] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.975 [2024-06-10 10:53:41.219485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.975 [2024-06-10 10:53:41.219492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.975 [2024-06-10 10:53:41.230861] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.975 [2024-06-10 10:53:41.230878] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.975 [2024-06-10 10:53:41.230884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.975 [2024-06-10 10:53:41.243384] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.975 [2024-06-10 10:53:41.243400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.975 [2024-06-10 10:53:41.243406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.975 [2024-06-10 10:53:41.255315] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:16.975 [2024-06-10 10:53:41.255332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:17370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.975 [2024-06-10 10:53:41.255338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.236 [2024-06-10 10:53:41.267709] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:17.236 [2024-06-10 10:53:41.267726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:17672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.236 [2024-06-10 10:53:41.267733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.236 [2024-06-10 10:53:41.279182] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:17.236 [2024-06-10 10:53:41.279198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.236 [2024-06-10 10:53:41.279204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.236 [2024-06-10 10:53:41.291785] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:17.236 [2024-06-10 10:53:41.291801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.236 [2024-06-10 10:53:41.291807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.236 [2024-06-10 10:53:41.304085] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:17.236 [2024-06-10 10:53:41.304102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.236 [2024-06-10 10:53:41.304108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.236 [2024-06-10 10:53:41.317223] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 
00:28:17.236 [2024-06-10 10:53:41.317239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.236 [2024-06-10 10:53:41.317250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.236 [2024-06-10 10:53:41.329167] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:17.236 [2024-06-10 10:53:41.329184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.236 [2024-06-10 10:53:41.329190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.236 [2024-06-10 10:53:41.340352] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:17.236 [2024-06-10 10:53:41.340369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.236 [2024-06-10 10:53:41.340375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.236 [2024-06-10 10:53:41.352659] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:17.236 [2024-06-10 10:53:41.352675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:25237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.236 [2024-06-10 10:53:41.352682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.236 [2024-06-10 10:53:41.364742] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:17.236 [2024-06-10 10:53:41.364759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.236 [2024-06-10 10:53:41.364765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.236 [2024-06-10 10:53:41.377023] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:17.236 [2024-06-10 10:53:41.377040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.236 [2024-06-10 10:53:41.377046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.236 [2024-06-10 10:53:41.389221] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:17.236 [2024-06-10 10:53:41.389238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.236 [2024-06-10 10:53:41.389249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.236 [2024-06-10 10:53:41.402477] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:17.236 [2024-06-10 10:53:41.402495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.236 [2024-06-10 10:53:41.402501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.236 [2024-06-10 10:53:41.413274] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:17.236 [2024-06-10 10:53:41.413291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.236 [2024-06-10 10:53:41.413300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.236 [2024-06-10 10:53:41.427781] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:17.236 [2024-06-10 10:53:41.427797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.236 [2024-06-10 10:53:41.427803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.236 [2024-06-10 10:53:41.438321] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:17.236 [2024-06-10 10:53:41.438337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.236 [2024-06-10 10:53:41.438344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.236 [2024-06-10 10:53:41.451201] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:17.236 [2024-06-10 10:53:41.451219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.236 [2024-06-10 10:53:41.451225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.236 [2024-06-10 10:53:41.465042] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:17.236 [2024-06-10 10:53:41.465059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:17537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.236 [2024-06-10 10:53:41.465065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.236 [2024-06-10 10:53:41.476944] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:17.236 [2024-06-10 10:53:41.476960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.236 [2024-06-10 10:53:41.476966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:28:17.236 [2024-06-10 10:53:41.489628] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:17.236 [2024-06-10 10:53:41.489644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.236 [2024-06-10 10:53:41.489651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.236 [2024-06-10 10:53:41.500945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:17.237 [2024-06-10 10:53:41.500961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.237 [2024-06-10 10:53:41.500968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.237 [2024-06-10 10:53:41.511765] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:17.237 [2024-06-10 10:53:41.511782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.237 [2024-06-10 10:53:41.511789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.496 [2024-06-10 10:53:41.524511] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:17.497 [2024-06-10 10:53:41.524531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.497 [2024-06-10 10:53:41.524538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.497 [2024-06-10 10:53:41.537295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:17.497 [2024-06-10 10:53:41.537312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.497 [2024-06-10 10:53:41.537318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.497 [2024-06-10 10:53:41.549527] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:17.497 [2024-06-10 10:53:41.549545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.497 [2024-06-10 10:53:41.549551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.497 [2024-06-10 10:53:41.561402] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:17.497 [2024-06-10 10:53:41.561419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.497 [2024-06-10 10:53:41.561425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.497 [2024-06-10 10:53:41.573049] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:17.497 [2024-06-10 10:53:41.573065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.497 [2024-06-10 10:53:41.573071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.497 [2024-06-10 10:53:41.586576] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:17.497 [2024-06-10 10:53:41.586593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.497 [2024-06-10 10:53:41.586599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.497 [2024-06-10 10:53:41.598546] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:17.497 [2024-06-10 10:53:41.598562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.497 [2024-06-10 10:53:41.598568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.497 [2024-06-10 10:53:41.610909] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:17.497 [2024-06-10 10:53:41.610926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.497 [2024-06-10 10:53:41.610932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.497 [2024-06-10 10:53:41.623453] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:17.497 [2024-06-10 10:53:41.623470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.497 [2024-06-10 10:53:41.623476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.497 [2024-06-10 10:53:41.635981] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:17.497 [2024-06-10 10:53:41.635997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.497 [2024-06-10 10:53:41.636003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.497 [2024-06-10 10:53:41.647784] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:17.497 [2024-06-10 10:53:41.647801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.497 [2024-06-10 10:53:41.647808] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.497 [2024-06-10 10:53:41.659511] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:17.497 [2024-06-10 10:53:41.659528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.497 [2024-06-10 10:53:41.659534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.497 [2024-06-10 10:53:41.672226] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:17.497 [2024-06-10 10:53:41.672246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.497 [2024-06-10 10:53:41.672252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.497 [2024-06-10 10:53:41.684736] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:17.497 [2024-06-10 10:53:41.684752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.497 [2024-06-10 10:53:41.684758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.497 [2024-06-10 10:53:41.695747] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:17.497 [2024-06-10 10:53:41.695764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.497 [2024-06-10 10:53:41.695770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.497 [2024-06-10 10:53:41.707101] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:17.497 [2024-06-10 10:53:41.707118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.497 [2024-06-10 10:53:41.707124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.497 [2024-06-10 10:53:41.720999] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:17.497 [2024-06-10 10:53:41.721015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.497 [2024-06-10 10:53:41.721021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.497 [2024-06-10 10:53:41.732621] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60) 00:28:17.497 [2024-06-10 10:53:41.732638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:17.497 [2024-06-10 10:53:41.732647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:17.497 [2024-06-10 10:53:41.744876] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60)
00:28:17.497 [2024-06-10 10:53:41.744893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:17.497 [2024-06-10 10:53:41.744900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:17.497 [2024-06-10 10:53:41.757213] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60)
00:28:17.497 [2024-06-10 10:53:41.757230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:17.497 [2024-06-10 10:53:41.757236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:17.497 [2024-06-10 10:53:41.769661] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60)
00:28:17.497 [2024-06-10 10:53:41.769677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:17.497 [2024-06-10 10:53:41.769683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:17.497 [2024-06-10 10:53:41.781103] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x180fe60)
00:28:17.497 [2024-06-10 10:53:41.781119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:17.497 [2024-06-10 10:53:41.781126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:17.758
00:28:17.758 Latency(us)
00:28:17.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:17.758 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:17.758 nvme0n1 : 2.00 20749.15 81.05 0.00 0.00 6162.75 2594.13 19333.12
00:28:17.758 ===================================================================================================================
00:28:17.758 Total : 20749.15 81.05 0.00 0.00 6162.75 2594.13 19333.12
00:28:17.758 0
00:28:17.758 10:53:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:17.758 10:53:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:17.758 10:53:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:17.758 | .driver_specific
00:28:17.758 | .nvme_error
00:28:17.758 | .status_code
00:28:17.758 | .command_transient_transport_error'
00:28:17.758 10:53:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:17.758 10:53:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 162 > 0 ))
00:28:17.758 10:53:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1008660
00:28:17.758 10:53:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 1008660 ']'
00:28:17.758 10:53:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 1008660
00:28:17.758 10:53:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname
00:28:17.758 10:53:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:28:17.758 10:53:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1008660
00:28:17.758 10:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:28:17.758 10:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:28:17.758 10:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1008660'
00:28:17.758 killing process with pid 1008660
00:28:17.758 10:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 1008660
00:28:17.758 Received shutdown signal, test time was about 2.000000 seconds
00:28:17.758
00:28:17.758 Latency(us)
00:28:17.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:17.758 ===================================================================================================================
00:28:17.758 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:17.758 10:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 1008660
00:28:18.019 10:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:28:18.019 10:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:18.019 10:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:28:18.019 10:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:18.019 10:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:18.019 10:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1009438
00:28:18.019 10:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1009438 /var/tmp/bperf.sock
00:28:18.019 10:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 1009438 ']'
00:28:18.019 10:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:28:18.019 10:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:18.019 10:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100
00:28:18.019 10:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:18.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
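The trace above is the harness tearing down the previous bperf process and launching a fresh bdevperf instance for the 131072-byte randread error run, then waiting for its RPC socket. As a rough standalone equivalent, a minimal sketch assuming the SPDK build tree layout seen in the trace (the readiness loop is only a stand-in for the harness's waitforlisten helper, and rpc_get_methods is used here merely as a cheap liveness probe):

  # Start bdevperf on core 1 (-m 2), parked until RPCs arrive (-z), on a private RPC socket.
  BPERF_SOCK=/var/tmp/bperf.sock
  ./build/examples/bdevperf -m 2 -r "$BPERF_SOCK" -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # Poll the socket until the application answers, roughly what waitforlisten does.
  until ./scripts/rpc.py -s "$BPERF_SOCK" rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done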
00:28:18.019 10:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable
00:28:18.019 10:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:18.019 [2024-06-10 10:53:42.192169] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization...
00:28:18.019 [2024-06-10 10:53:42.192226] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1009438 ]
00:28:18.019 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:18.019 Zero copy mechanism will not be used.
00:28:18.019 EAL: No free 2048 kB hugepages reported on node 1
00:28:18.019 [2024-06-10 10:53:42.267441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:18.280 [2024-06-10 10:53:42.320250] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1
00:28:18.850 10:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:28:18.850 10:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0
00:28:18.850 10:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:18.850 10:53:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:18.850 10:53:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:18.850 10:53:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:28:18.850 10:53:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:19.111 10:53:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:28:19.111 10:53:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:19.111 10:53:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:19.111 nvme0n1
00:28:19.372 10:53:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:19.372 10:53:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:28:19.372 10:53:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:19.373 10:53:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:28:19.373 10:53:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:19.373 10:53:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:19.373 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:19.373 Zero copy mechanism will not be used.
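Taken together, the RPCs traced above amount to the following setup sequence for the data-digest error-injection run. This is a sketch rather than the harness itself: the socket path and all flags are copied from the trace, but rpc_cmd hides which socket the accel_error_inject_error calls are sent to, so targeting the application's default RPC socket for those two calls is an assumption.

  BPERF_SOCK=/var/tmp/bperf.sock
  # Track per-status-code NVMe error counters and retry failed I/O indefinitely in the bdev layer (-1).
  ./scripts/rpc.py -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Clear any crc32c injection left over from the previous run (socket choice is an assumption, see above),
  # then attach the remote subsystem with TCP data digest enabled on the initiator side.
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  ./scripts/rpc.py -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Arm crc32c corruption (flags copied verbatim from the trace) so the reads fail their digest check.
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
  # Run the 2-second workload, then read back how many commands ended in a transient transport error.
  ./examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests
  ./scripts/rpc.py -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The final pipeline mirrors the get_transient_errcount check seen earlier in the log, where the count read back from bdev_get_iostat is required to be greater than zero for the test to pass.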
00:28:19.373 Running I/O for 2 seconds... 00:28:19.373 [2024-06-10 10:53:43.521599] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.373 [2024-06-10 10:53:43.521629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.373 [2024-06-10 10:53:43.521638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:19.373 [2024-06-10 10:53:43.533342] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.373 [2024-06-10 10:53:43.533365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.373 [2024-06-10 10:53:43.533372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:19.373 [2024-06-10 10:53:43.546602] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.373 [2024-06-10 10:53:43.546622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.373 [2024-06-10 10:53:43.546628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:19.373 [2024-06-10 10:53:43.559313] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.373 [2024-06-10 10:53:43.559332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.373 [2024-06-10 10:53:43.559338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.373 [2024-06-10 10:53:43.567977] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.373 [2024-06-10 10:53:43.567995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.373 [2024-06-10 10:53:43.568001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:19.373 [2024-06-10 10:53:43.578423] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.373 [2024-06-10 10:53:43.578441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.373 [2024-06-10 10:53:43.578448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:19.373 [2024-06-10 10:53:43.588911] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.373 [2024-06-10 10:53:43.588929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.373 [2024-06-10 10:53:43.588936] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:19.373 [2024-06-10 10:53:43.601016] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.373 [2024-06-10 10:53:43.601034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.373 [2024-06-10 10:53:43.601041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.373 [2024-06-10 10:53:43.611572] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.373 [2024-06-10 10:53:43.611590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.373 [2024-06-10 10:53:43.611597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:19.373 [2024-06-10 10:53:43.622003] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.373 [2024-06-10 10:53:43.622020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.373 [2024-06-10 10:53:43.622027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:19.373 [2024-06-10 10:53:43.633045] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.373 [2024-06-10 10:53:43.633063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.373 [2024-06-10 10:53:43.633070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:19.373 [2024-06-10 10:53:43.644436] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.373 [2024-06-10 10:53:43.644454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.373 [2024-06-10 10:53:43.644461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.373 [2024-06-10 10:53:43.654622] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.373 [2024-06-10 10:53:43.654640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.373 [2024-06-10 10:53:43.654646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:19.634 [2024-06-10 10:53:43.665738] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.634 [2024-06-10 10:53:43.665756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.634 [2024-06-10 10:53:43.665766] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:19.634 [2024-06-10 10:53:43.678275] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.634 [2024-06-10 10:53:43.678293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.634 [2024-06-10 10:53:43.678299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:19.634 [2024-06-10 10:53:43.690674] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.634 [2024-06-10 10:53:43.690692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.634 [2024-06-10 10:53:43.690698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.634 [2024-06-10 10:53:43.704342] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.634 [2024-06-10 10:53:43.704362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.634 [2024-06-10 10:53:43.704368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:19.634 [2024-06-10 10:53:43.712788] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.634 [2024-06-10 10:53:43.712807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.634 [2024-06-10 10:53:43.712813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:19.634 [2024-06-10 10:53:43.722485] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.634 [2024-06-10 10:53:43.722502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.634 [2024-06-10 10:53:43.722509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:19.634 [2024-06-10 10:53:43.733256] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.634 [2024-06-10 10:53:43.733274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.634 [2024-06-10 10:53:43.733281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.634 [2024-06-10 10:53:43.744806] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.634 [2024-06-10 10:53:43.744824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:19.634 [2024-06-10 10:53:43.744830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:19.634 [2024-06-10 10:53:43.756159] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.634 [2024-06-10 10:53:43.756177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.634 [2024-06-10 10:53:43.756184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:19.635 [2024-06-10 10:53:43.765922] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.635 [2024-06-10 10:53:43.765943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.635 [2024-06-10 10:53:43.765950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:19.635 [2024-06-10 10:53:43.778694] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.635 [2024-06-10 10:53:43.778714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.635 [2024-06-10 10:53:43.778721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.635 [2024-06-10 10:53:43.792421] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.635 [2024-06-10 10:53:43.792440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.635 [2024-06-10 10:53:43.792446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:19.635 [2024-06-10 10:53:43.804660] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.635 [2024-06-10 10:53:43.804679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.635 [2024-06-10 10:53:43.804686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:19.635 [2024-06-10 10:53:43.817928] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.635 [2024-06-10 10:53:43.817947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.635 [2024-06-10 10:53:43.817953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:19.635 [2024-06-10 10:53:43.830841] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.635 [2024-06-10 10:53:43.830860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.635 [2024-06-10 10:53:43.830866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.635 [2024-06-10 10:53:43.844505] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.635 [2024-06-10 10:53:43.844524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.635 [2024-06-10 10:53:43.844531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:19.635 [2024-06-10 10:53:43.856123] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.635 [2024-06-10 10:53:43.856142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.635 [2024-06-10 10:53:43.856148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:19.635 [2024-06-10 10:53:43.868506] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.635 [2024-06-10 10:53:43.868525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.635 [2024-06-10 10:53:43.868532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:19.635 [2024-06-10 10:53:43.881620] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.635 [2024-06-10 10:53:43.881639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.635 [2024-06-10 10:53:43.881645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.635 [2024-06-10 10:53:43.894637] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.635 [2024-06-10 10:53:43.894656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.635 [2024-06-10 10:53:43.894663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:19.635 [2024-06-10 10:53:43.906336] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.635 [2024-06-10 10:53:43.906354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.635 [2024-06-10 10:53:43.906360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:19.635 [2024-06-10 10:53:43.917430] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.635 [2024-06-10 10:53:43.917449] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.635 [2024-06-10 10:53:43.917455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:19.896 [2024-06-10 10:53:43.928401] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.896 [2024-06-10 10:53:43.928420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.896 [2024-06-10 10:53:43.928427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.896 [2024-06-10 10:53:43.940517] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.896 [2024-06-10 10:53:43.940535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.896 [2024-06-10 10:53:43.940542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:19.896 [2024-06-10 10:53:43.951296] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.896 [2024-06-10 10:53:43.951314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.896 [2024-06-10 10:53:43.951320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:19.896 [2024-06-10 10:53:43.962089] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.896 [2024-06-10 10:53:43.962108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.896 [2024-06-10 10:53:43.962114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:19.896 [2024-06-10 10:53:43.973371] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.896 [2024-06-10 10:53:43.973389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.896 [2024-06-10 10:53:43.973399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.896 [2024-06-10 10:53:43.985519] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.897 [2024-06-10 10:53:43.985538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.897 [2024-06-10 10:53:43.985544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:19.897 [2024-06-10 10:53:43.996932] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.897 [2024-06-10 10:53:43.996950] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.897 [2024-06-10 10:53:43.996956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:19.897 [2024-06-10 10:53:44.008747] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.897 [2024-06-10 10:53:44.008765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.897 [2024-06-10 10:53:44.008771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:19.897 [2024-06-10 10:53:44.022896] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.897 [2024-06-10 10:53:44.022913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.897 [2024-06-10 10:53:44.022920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.897 [2024-06-10 10:53:44.036859] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.897 [2024-06-10 10:53:44.036878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.897 [2024-06-10 10:53:44.036884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:19.897 [2024-06-10 10:53:44.049468] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.897 [2024-06-10 10:53:44.049487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.897 [2024-06-10 10:53:44.049493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:19.897 [2024-06-10 10:53:44.063214] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.897 [2024-06-10 10:53:44.063232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.897 [2024-06-10 10:53:44.063239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:19.897 [2024-06-10 10:53:44.077329] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.897 [2024-06-10 10:53:44.077347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.897 [2024-06-10 10:53:44.077354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.897 [2024-06-10 10:53:44.089956] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 
00:28:19.897 [2024-06-10 10:53:44.089976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.897 [2024-06-10 10:53:44.089982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:19.897 [2024-06-10 10:53:44.099405] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.897 [2024-06-10 10:53:44.099423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.897 [2024-06-10 10:53:44.099429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:19.897 [2024-06-10 10:53:44.109642] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.897 [2024-06-10 10:53:44.109660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.897 [2024-06-10 10:53:44.109667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:19.897 [2024-06-10 10:53:44.119941] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.897 [2024-06-10 10:53:44.119960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.897 [2024-06-10 10:53:44.119968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.897 [2024-06-10 10:53:44.131607] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.897 [2024-06-10 10:53:44.131626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.897 [2024-06-10 10:53:44.131632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:19.897 [2024-06-10 10:53:44.142546] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.897 [2024-06-10 10:53:44.142565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.897 [2024-06-10 10:53:44.142572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:19.897 [2024-06-10 10:53:44.152579] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.897 [2024-06-10 10:53:44.152598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.897 [2024-06-10 10:53:44.152604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:19.897 [2024-06-10 10:53:44.163877] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x581de0) 00:28:19.897 [2024-06-10 10:53:44.163895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.897 [2024-06-10 10:53:44.163901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.897 [2024-06-10 10:53:44.175188] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:19.897 [2024-06-10 10:53:44.175208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.897 [2024-06-10 10:53:44.175214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.159 [2024-06-10 10:53:44.186556] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.159 [2024-06-10 10:53:44.186576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.159 [2024-06-10 10:53:44.186582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.159 [2024-06-10 10:53:44.197432] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.159 [2024-06-10 10:53:44.197450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.159 [2024-06-10 10:53:44.197456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.159 [2024-06-10 10:53:44.206294] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.159 [2024-06-10 10:53:44.206312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.159 [2024-06-10 10:53:44.206319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.159 [2024-06-10 10:53:44.214167] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.159 [2024-06-10 10:53:44.214186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.160 [2024-06-10 10:53:44.214192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.160 [2024-06-10 10:53:44.223738] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.160 [2024-06-10 10:53:44.223757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.160 [2024-06-10 10:53:44.223763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.160 [2024-06-10 10:53:44.234003] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.160 [2024-06-10 10:53:44.234022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.160 [2024-06-10 10:53:44.234028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.160 [2024-06-10 10:53:44.244738] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.160 [2024-06-10 10:53:44.244756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.160 [2024-06-10 10:53:44.244764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.160 [2024-06-10 10:53:44.255097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.160 [2024-06-10 10:53:44.255115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.160 [2024-06-10 10:53:44.255121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.160 [2024-06-10 10:53:44.268344] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.160 [2024-06-10 10:53:44.268365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.160 [2024-06-10 10:53:44.268371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.160 [2024-06-10 10:53:44.279490] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.160 [2024-06-10 10:53:44.279508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.160 [2024-06-10 10:53:44.279515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.160 [2024-06-10 10:53:44.291028] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.160 [2024-06-10 10:53:44.291047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.160 [2024-06-10 10:53:44.291053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.160 [2024-06-10 10:53:44.303514] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.160 [2024-06-10 10:53:44.303533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.160 [2024-06-10 10:53:44.303539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:28:20.160 [2024-06-10 10:53:44.314334] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.160 [2024-06-10 10:53:44.314352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.160 [2024-06-10 10:53:44.314359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.160 [2024-06-10 10:53:44.323240] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.160 [2024-06-10 10:53:44.323263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.160 [2024-06-10 10:53:44.323270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.160 [2024-06-10 10:53:44.333396] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.160 [2024-06-10 10:53:44.333415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.160 [2024-06-10 10:53:44.333421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.160 [2024-06-10 10:53:44.344608] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.160 [2024-06-10 10:53:44.344626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.160 [2024-06-10 10:53:44.344633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.160 [2024-06-10 10:53:44.356101] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.160 [2024-06-10 10:53:44.356120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.160 [2024-06-10 10:53:44.356126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.160 [2024-06-10 10:53:44.366735] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.160 [2024-06-10 10:53:44.366753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.160 [2024-06-10 10:53:44.366760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.160 [2024-06-10 10:53:44.377908] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.160 [2024-06-10 10:53:44.377926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.160 [2024-06-10 10:53:44.377932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.160 [2024-06-10 10:53:44.388994] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.160 [2024-06-10 10:53:44.389013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.160 [2024-06-10 10:53:44.389019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.160 [2024-06-10 10:53:44.400851] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.160 [2024-06-10 10:53:44.400870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.160 [2024-06-10 10:53:44.400876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.160 [2024-06-10 10:53:44.411927] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.160 [2024-06-10 10:53:44.411945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.160 [2024-06-10 10:53:44.411951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.160 [2024-06-10 10:53:44.422346] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.160 [2024-06-10 10:53:44.422364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.160 [2024-06-10 10:53:44.422370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.160 [2024-06-10 10:53:44.433334] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.160 [2024-06-10 10:53:44.433352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.160 [2024-06-10 10:53:44.433358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.160 [2024-06-10 10:53:44.444025] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.160 [2024-06-10 10:53:44.444044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.160 [2024-06-10 10:53:44.444050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.457 [2024-06-10 10:53:44.454856] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.457 [2024-06-10 10:53:44.454875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.457 [2024-06-10 10:53:44.454884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.457 [2024-06-10 10:53:44.467001] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.457 [2024-06-10 10:53:44.467019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.457 [2024-06-10 10:53:44.467025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.457 [2024-06-10 10:53:44.479976] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.457 [2024-06-10 10:53:44.479994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.457 [2024-06-10 10:53:44.480001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.457 [2024-06-10 10:53:44.490297] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.457 [2024-06-10 10:53:44.490314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.457 [2024-06-10 10:53:44.490320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.457 [2024-06-10 10:53:44.500021] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.457 [2024-06-10 10:53:44.500040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.457 [2024-06-10 10:53:44.500046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.457 [2024-06-10 10:53:44.510414] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.457 [2024-06-10 10:53:44.510433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.458 [2024-06-10 10:53:44.510439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.458 [2024-06-10 10:53:44.523389] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.458 [2024-06-10 10:53:44.523407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.458 [2024-06-10 10:53:44.523413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.458 [2024-06-10 10:53:44.536305] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.458 [2024-06-10 10:53:44.536323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.458 [2024-06-10 10:53:44.536330] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.458 [2024-06-10 10:53:44.549639] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.458 [2024-06-10 10:53:44.549657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.458 [2024-06-10 10:53:44.549664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.458 [2024-06-10 10:53:44.560925] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.458 [2024-06-10 10:53:44.560947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.458 [2024-06-10 10:53:44.560953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.458 [2024-06-10 10:53:44.572122] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.458 [2024-06-10 10:53:44.572140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.458 [2024-06-10 10:53:44.572147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.458 [2024-06-10 10:53:44.582598] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.458 [2024-06-10 10:53:44.582616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.458 [2024-06-10 10:53:44.582623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.458 [2024-06-10 10:53:44.593626] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.458 [2024-06-10 10:53:44.593644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.458 [2024-06-10 10:53:44.593651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.458 [2024-06-10 10:53:44.604163] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.458 [2024-06-10 10:53:44.604181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.458 [2024-06-10 10:53:44.604187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.458 [2024-06-10 10:53:44.615908] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.458 [2024-06-10 10:53:44.615925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.458 
[2024-06-10 10:53:44.615932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.458 [2024-06-10 10:53:44.627907] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.458 [2024-06-10 10:53:44.627927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.458 [2024-06-10 10:53:44.627934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.458 [2024-06-10 10:53:44.639164] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.458 [2024-06-10 10:53:44.639182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.458 [2024-06-10 10:53:44.639189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.458 [2024-06-10 10:53:44.649416] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.458 [2024-06-10 10:53:44.649435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.458 [2024-06-10 10:53:44.649441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.458 [2024-06-10 10:53:44.659816] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.458 [2024-06-10 10:53:44.659835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.458 [2024-06-10 10:53:44.659841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.458 [2024-06-10 10:53:44.669277] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.458 [2024-06-10 10:53:44.669295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.458 [2024-06-10 10:53:44.669302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.458 [2024-06-10 10:53:44.680533] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.458 [2024-06-10 10:53:44.680552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.458 [2024-06-10 10:53:44.680558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.458 [2024-06-10 10:53:44.692515] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.458 [2024-06-10 10:53:44.692534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:20.458 [2024-06-10 10:53:44.692540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.458 [2024-06-10 10:53:44.703301] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.458 [2024-06-10 10:53:44.703319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.458 [2024-06-10 10:53:44.703326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.458 [2024-06-10 10:53:44.713747] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.458 [2024-06-10 10:53:44.713766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.458 [2024-06-10 10:53:44.713772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.743 [2024-06-10 10:53:44.723420] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.743 [2024-06-10 10:53:44.723439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.743 [2024-06-10 10:53:44.723445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.743 [2024-06-10 10:53:44.734087] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.743 [2024-06-10 10:53:44.734106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.743 [2024-06-10 10:53:44.734112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.743 [2024-06-10 10:53:44.746913] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.743 [2024-06-10 10:53:44.746931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.743 [2024-06-10 10:53:44.746940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.743 [2024-06-10 10:53:44.757634] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.743 [2024-06-10 10:53:44.757652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.743 [2024-06-10 10:53:44.757659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.743 [2024-06-10 10:53:44.770447] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.743 [2024-06-10 10:53:44.770466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 
nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.743 [2024-06-10 10:53:44.770472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.743 [2024-06-10 10:53:44.781217] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.743 [2024-06-10 10:53:44.781235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.743 [2024-06-10 10:53:44.781247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.743 [2024-06-10 10:53:44.794153] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.743 [2024-06-10 10:53:44.794172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.743 [2024-06-10 10:53:44.794178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.743 [2024-06-10 10:53:44.807523] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.743 [2024-06-10 10:53:44.807541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.743 [2024-06-10 10:53:44.807547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.743 [2024-06-10 10:53:44.819884] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.743 [2024-06-10 10:53:44.819902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.743 [2024-06-10 10:53:44.819908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.743 [2024-06-10 10:53:44.831994] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.743 [2024-06-10 10:53:44.832012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.743 [2024-06-10 10:53:44.832019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.743 [2024-06-10 10:53:44.845987] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.743 [2024-06-10 10:53:44.846004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.743 [2024-06-10 10:53:44.846011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.743 [2024-06-10 10:53:44.857628] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.743 [2024-06-10 10:53:44.857650] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.744 [2024-06-10 10:53:44.857657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.744 [2024-06-10 10:53:44.869379] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.744 [2024-06-10 10:53:44.869397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.744 [2024-06-10 10:53:44.869403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.744 [2024-06-10 10:53:44.880126] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.744 [2024-06-10 10:53:44.880144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.744 [2024-06-10 10:53:44.880150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.744 [2024-06-10 10:53:44.891506] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.744 [2024-06-10 10:53:44.891525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.744 [2024-06-10 10:53:44.891532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.744 [2024-06-10 10:53:44.903837] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.744 [2024-06-10 10:53:44.903855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.744 [2024-06-10 10:53:44.903861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.744 [2024-06-10 10:53:44.916420] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.744 [2024-06-10 10:53:44.916439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.744 [2024-06-10 10:53:44.916446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.744 [2024-06-10 10:53:44.928300] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.744 [2024-06-10 10:53:44.928319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.744 [2024-06-10 10:53:44.928325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.744 [2024-06-10 10:53:44.940504] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.744 
[2024-06-10 10:53:44.940523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.744 [2024-06-10 10:53:44.940529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.744 [2024-06-10 10:53:44.953236] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.744 [2024-06-10 10:53:44.953259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.744 [2024-06-10 10:53:44.953265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.744 [2024-06-10 10:53:44.965335] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.744 [2024-06-10 10:53:44.965353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.744 [2024-06-10 10:53:44.965359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.744 [2024-06-10 10:53:44.976829] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.744 [2024-06-10 10:53:44.976847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.744 [2024-06-10 10:53:44.976853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.744 [2024-06-10 10:53:44.988487] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.744 [2024-06-10 10:53:44.988505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.744 [2024-06-10 10:53:44.988511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.744 [2024-06-10 10:53:44.997668] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.744 [2024-06-10 10:53:44.997685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.744 [2024-06-10 10:53:44.997692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.744 [2024-06-10 10:53:45.011901] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:20.744 [2024-06-10 10:53:45.011919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.744 [2024-06-10 10:53:45.011925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.744 [2024-06-10 10:53:45.024050] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x581de0) 00:28:20.744 [2024-06-10 10:53:45.024069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.744 [2024-06-10 10:53:45.024075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.005 [2024-06-10 10:53:45.035356] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.005 [2024-06-10 10:53:45.035374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.005 [2024-06-10 10:53:45.035380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.005 [2024-06-10 10:53:45.048426] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.005 [2024-06-10 10:53:45.048445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.005 [2024-06-10 10:53:45.048452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.005 [2024-06-10 10:53:45.061566] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.005 [2024-06-10 10:53:45.061586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.005 [2024-06-10 10:53:45.061596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.005 [2024-06-10 10:53:45.073899] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.005 [2024-06-10 10:53:45.073917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.005 [2024-06-10 10:53:45.073924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.005 [2024-06-10 10:53:45.084076] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.005 [2024-06-10 10:53:45.084094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.005 [2024-06-10 10:53:45.084101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.005 [2024-06-10 10:53:45.095961] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.005 [2024-06-10 10:53:45.095979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.005 [2024-06-10 10:53:45.095985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.005 [2024-06-10 10:53:45.108235] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.005 [2024-06-10 10:53:45.108261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.005 [2024-06-10 10:53:45.108268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.005 [2024-06-10 10:53:45.118898] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.005 [2024-06-10 10:53:45.118917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.005 [2024-06-10 10:53:45.118923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.005 [2024-06-10 10:53:45.128787] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.005 [2024-06-10 10:53:45.128806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.005 [2024-06-10 10:53:45.128813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.005 [2024-06-10 10:53:45.140010] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.005 [2024-06-10 10:53:45.140028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.005 [2024-06-10 10:53:45.140035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.005 [2024-06-10 10:53:45.152007] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.005 [2024-06-10 10:53:45.152025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.005 [2024-06-10 10:53:45.152032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.005 [2024-06-10 10:53:45.162477] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.005 [2024-06-10 10:53:45.162500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.005 [2024-06-10 10:53:45.162506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.005 [2024-06-10 10:53:45.173219] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.005 [2024-06-10 10:53:45.173238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.005 [2024-06-10 10:53:45.173249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:28:21.005 [2024-06-10 10:53:45.185302] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.005 [2024-06-10 10:53:45.185320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.005 [2024-06-10 10:53:45.185326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.005 [2024-06-10 10:53:45.195677] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.005 [2024-06-10 10:53:45.195697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.005 [2024-06-10 10:53:45.195703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.005 [2024-06-10 10:53:45.207120] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.005 [2024-06-10 10:53:45.207139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.005 [2024-06-10 10:53:45.207146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.005 [2024-06-10 10:53:45.217250] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.005 [2024-06-10 10:53:45.217269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.005 [2024-06-10 10:53:45.217275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.005 [2024-06-10 10:53:45.227680] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.005 [2024-06-10 10:53:45.227699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.005 [2024-06-10 10:53:45.227705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.005 [2024-06-10 10:53:45.238148] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.005 [2024-06-10 10:53:45.238168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.005 [2024-06-10 10:53:45.238174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.005 [2024-06-10 10:53:45.248509] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.005 [2024-06-10 10:53:45.248527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.005 [2024-06-10 10:53:45.248533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.005 [2024-06-10 10:53:45.259513] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.005 [2024-06-10 10:53:45.259531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.005 [2024-06-10 10:53:45.259538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.005 [2024-06-10 10:53:45.271167] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.005 [2024-06-10 10:53:45.271187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.005 [2024-06-10 10:53:45.271194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.005 [2024-06-10 10:53:45.280897] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.005 [2024-06-10 10:53:45.280916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.005 [2024-06-10 10:53:45.280923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.005 [2024-06-10 10:53:45.289944] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.005 [2024-06-10 10:53:45.289963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.005 [2024-06-10 10:53:45.289970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.266 [2024-06-10 10:53:45.299633] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.266 [2024-06-10 10:53:45.299653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.266 [2024-06-10 10:53:45.299659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.266 [2024-06-10 10:53:45.310822] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.266 [2024-06-10 10:53:45.310841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.266 [2024-06-10 10:53:45.310848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.266 [2024-06-10 10:53:45.321192] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.266 [2024-06-10 10:53:45.321210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.266 [2024-06-10 10:53:45.321217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.266 [2024-06-10 10:53:45.332380] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.266 [2024-06-10 10:53:45.332399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.266 [2024-06-10 10:53:45.332405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.266 [2024-06-10 10:53:45.343664] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.266 [2024-06-10 10:53:45.343683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.266 [2024-06-10 10:53:45.343693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.266 [2024-06-10 10:53:45.354359] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.266 [2024-06-10 10:53:45.354378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.266 [2024-06-10 10:53:45.354385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.266 [2024-06-10 10:53:45.364744] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.266 [2024-06-10 10:53:45.364763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.266 [2024-06-10 10:53:45.364769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.266 [2024-06-10 10:53:45.376891] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.266 [2024-06-10 10:53:45.376910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.266 [2024-06-10 10:53:45.376916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.266 [2024-06-10 10:53:45.387859] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.266 [2024-06-10 10:53:45.387878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.266 [2024-06-10 10:53:45.387885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.266 [2024-06-10 10:53:45.399895] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.266 [2024-06-10 10:53:45.399913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.266 [2024-06-10 10:53:45.399920] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.266 [2024-06-10 10:53:45.411899] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.266 [2024-06-10 10:53:45.411918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.266 [2024-06-10 10:53:45.411924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.266 [2024-06-10 10:53:45.423488] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.266 [2024-06-10 10:53:45.423507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.266 [2024-06-10 10:53:45.423513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.266 [2024-06-10 10:53:45.435059] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.266 [2024-06-10 10:53:45.435077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.266 [2024-06-10 10:53:45.435084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.266 [2024-06-10 10:53:45.447083] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.266 [2024-06-10 10:53:45.447102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.266 [2024-06-10 10:53:45.447108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.266 [2024-06-10 10:53:45.458501] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.266 [2024-06-10 10:53:45.458520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.266 [2024-06-10 10:53:45.458526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.266 [2024-06-10 10:53:45.468906] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.266 [2024-06-10 10:53:45.468925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.266 [2024-06-10 10:53:45.468932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.266 [2024-06-10 10:53:45.479372] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.266 [2024-06-10 10:53:45.479390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.266 
[2024-06-10 10:53:45.479397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.266 [2024-06-10 10:53:45.491022] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.266 [2024-06-10 10:53:45.491041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.266 [2024-06-10 10:53:45.491048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.266 [2024-06-10 10:53:45.502345] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x581de0) 00:28:21.266 [2024-06-10 10:53:45.502366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.266 [2024-06-10 10:53:45.502372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.266 00:28:21.266 Latency(us) 00:28:21.266 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:21.266 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:21.266 nvme0n1 : 2.00 2720.77 340.10 0.00 0.00 5877.58 1290.24 15182.51 00:28:21.267 =================================================================================================================== 00:28:21.267 Total : 2720.77 340.10 0.00 0.00 5877.58 1290.24 15182.51 00:28:21.267 0 00:28:21.267 10:53:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:21.267 10:53:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:21.267 10:53:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:21.267 | .driver_specific 00:28:21.267 | .nvme_error 00:28:21.267 | .status_code 00:28:21.267 | .command_transient_transport_error' 00:28:21.267 10:53:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:21.527 10:53:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 175 > 0 )) 00:28:21.527 10:53:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1009438 00:28:21.527 10:53:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 1009438 ']' 00:28:21.527 10:53:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 1009438 00:28:21.527 10:53:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:28:21.527 10:53:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:21.527 10:53:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1009438 00:28:21.527 10:53:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:21.527 10:53:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:21.527 10:53:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process 
with pid 1009438' 00:28:21.527 killing process with pid 1009438 00:28:21.527 10:53:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 1009438 00:28:21.527 Received shutdown signal, test time was about 2.000000 seconds 00:28:21.527 00:28:21.527 Latency(us) 00:28:21.527 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:21.527 =================================================================================================================== 00:28:21.527 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:21.527 10:53:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 1009438 00:28:21.787 10:53:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:28:21.787 10:53:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:21.788 10:53:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:21.788 10:53:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:21.788 10:53:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:21.788 10:53:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1010130 00:28:21.788 10:53:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1010130 /var/tmp/bperf.sock 00:28:21.788 10:53:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 1010130 ']' 00:28:21.788 10:53:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:28:21.788 10:53:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:21.788 10:53:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:21.788 10:53:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:21.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:21.788 10:53:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:21.788 10:53:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:21.788 [2024-06-10 10:53:45.925030] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
00:28:21.788 [2024-06-10 10:53:45.925085] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1010130 ] 00:28:21.788 EAL: No free 2048 kB hugepages reported on node 1 00:28:21.788 [2024-06-10 10:53:45.999532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.788 [2024-06-10 10:53:46.052518] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:28:22.728 10:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:22.728 10:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:28:22.728 10:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:22.728 10:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:22.728 10:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:22.728 10:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:22.728 10:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:22.728 10:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:22.728 10:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:22.728 10:53:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:22.989 nvme0n1 00:28:22.989 10:53:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:22.989 10:53:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:22.989 10:53:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:22.989 10:53:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:22.989 10:53:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:22.989 10:53:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:23.250 Running I/O for 2 seconds... 
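The run_bperf_err randwrite 4096 128 step traced above boils down to a short RPC sequence. The sketch below condenses it; every binary, socket path, flag, and the jq filter is taken from the trace itself, while the $SPDK_DIR shorthand and the use of the target application's default RPC socket for the error-injection calls are assumptions added here for readability.

# Minimal sketch of the digest-error bperf flow, assuming $SPDK_DIR points at the checked-out SPDK tree.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# Start bdevperf on its own RPC socket; -z makes it wait until perform_tests is issued.
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &

# Keep per-controller NVMe error statistics and retry failed I/O indefinitely.
"$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Injection is switched off while attaching (issued via rpc_cmd in the trace; the default target
# socket is assumed here), then the controller is attached with data digest (--ddgst) enabled over TCP.
"$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
"$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt the next 256 crc32c operations, then kick off the 2-second workload.
"$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests

# Read back how many commands completed with a transient transport error, as get_transient_errcount does.
"$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The earlier randread pass used the same readback; its get_transient_errcount check ((( 175 > 0 )) above) passed because the injected crc32c corruption surfaced as data digest errors and COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions, which is what the repeated notices in this part of the log record.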
00:28:23.250 [2024-06-10 10:53:47.344975] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190f96f8 00:28:23.250 [2024-06-10 10:53:47.345943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.250 [2024-06-10 10:53:47.345972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.250 [2024-06-10 10:53:47.359779] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190de038 00:28:23.250 [2024-06-10 10:53:47.361600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:15056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.250 [2024-06-10 10:53:47.361619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:23.250 [2024-06-10 10:53:47.369212] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190e7c50 00:28:23.250 [2024-06-10 10:53:47.370410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.250 [2024-06-10 10:53:47.370427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:23.250 [2024-06-10 10:53:47.381891] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190fdeb0 00:28:23.250 [2024-06-10 10:53:47.383263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.250 [2024-06-10 10:53:47.383280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:23.250 [2024-06-10 10:53:47.394905] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190fe720 00:28:23.250 [2024-06-10 10:53:47.396710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.250 [2024-06-10 10:53:47.396727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:23.250 [2024-06-10 10:53:47.404677] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.250 [2024-06-10 10:53:47.406027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.250 [2024-06-10 10:53:47.406044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:23.250 [2024-06-10 10:53:47.417082] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.250 [2024-06-10 10:53:47.418447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:9306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.250 [2024-06-10 10:53:47.418464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 
sqhd:0070 p:0 m:0 dnr:0 00:28:23.250 [2024-06-10 10:53:47.428758] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.250 [2024-06-10 10:53:47.430118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.250 [2024-06-10 10:53:47.430136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.250 [2024-06-10 10:53:47.440454] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.250 [2024-06-10 10:53:47.441816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.250 [2024-06-10 10:53:47.441833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.250 [2024-06-10 10:53:47.452155] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.250 [2024-06-10 10:53:47.453532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:18480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.251 [2024-06-10 10:53:47.453549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.251 [2024-06-10 10:53:47.463956] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.251 [2024-06-10 10:53:47.465316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.251 [2024-06-10 10:53:47.465332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.251 [2024-06-10 10:53:47.475605] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.251 [2024-06-10 10:53:47.476959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.251 [2024-06-10 10:53:47.476976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.251 [2024-06-10 10:53:47.487238] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.251 [2024-06-10 10:53:47.488596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.251 [2024-06-10 10:53:47.488615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.251 [2024-06-10 10:53:47.498883] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.251 [2024-06-10 10:53:47.500239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.251 [2024-06-10 10:53:47.500259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:16 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.251 [2024-06-10 10:53:47.510525] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.251 [2024-06-10 10:53:47.511883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.251 [2024-06-10 10:53:47.511899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.251 [2024-06-10 10:53:47.522168] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.251 [2024-06-10 10:53:47.523538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.251 [2024-06-10 10:53:47.523555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.251 [2024-06-10 10:53:47.533834] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.251 [2024-06-10 10:53:47.535192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.251 [2024-06-10 10:53:47.535209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.512 [2024-06-10 10:53:47.545502] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.512 [2024-06-10 10:53:47.546862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.512 [2024-06-10 10:53:47.546879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.512 [2024-06-10 10:53:47.557141] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.512 [2024-06-10 10:53:47.558503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.512 [2024-06-10 10:53:47.558519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.512 [2024-06-10 10:53:47.568775] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.512 [2024-06-10 10:53:47.570132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.512 [2024-06-10 10:53:47.570148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.512 [2024-06-10 10:53:47.580410] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.512 [2024-06-10 10:53:47.581768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.512 [2024-06-10 10:53:47.581784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.512 [2024-06-10 10:53:47.592045] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.512 [2024-06-10 10:53:47.593365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:8738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.512 [2024-06-10 10:53:47.593384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.512 [2024-06-10 10:53:47.603683] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.512 [2024-06-10 10:53:47.605010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.512 [2024-06-10 10:53:47.605027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.512 [2024-06-10 10:53:47.615333] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.512 [2024-06-10 10:53:47.616689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.512 [2024-06-10 10:53:47.616706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.512 [2024-06-10 10:53:47.626947] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.512 [2024-06-10 10:53:47.628298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.512 [2024-06-10 10:53:47.628315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.512 [2024-06-10 10:53:47.638605] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.512 [2024-06-10 10:53:47.639962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.512 [2024-06-10 10:53:47.639978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.512 [2024-06-10 10:53:47.650217] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.512 [2024-06-10 10:53:47.651580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.512 [2024-06-10 10:53:47.651597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.512 [2024-06-10 10:53:47.661905] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.512 [2024-06-10 10:53:47.663257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.512 [2024-06-10 10:53:47.663273] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.512 [2024-06-10 10:53:47.673537] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.512 [2024-06-10 10:53:47.674899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.512 [2024-06-10 10:53:47.674915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.512 [2024-06-10 10:53:47.685194] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.512 [2024-06-10 10:53:47.686560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.512 [2024-06-10 10:53:47.686577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.512 [2024-06-10 10:53:47.696837] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.512 [2024-06-10 10:53:47.698198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.512 [2024-06-10 10:53:47.698215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.512 [2024-06-10 10:53:47.708492] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.513 [2024-06-10 10:53:47.709847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:4443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-06-10 10:53:47.709864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.513 [2024-06-10 10:53:47.720140] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.513 [2024-06-10 10:53:47.721507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-06-10 10:53:47.721524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.513 [2024-06-10 10:53:47.731805] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.513 [2024-06-10 10:53:47.733155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-06-10 10:53:47.733171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.513 [2024-06-10 10:53:47.743443] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.513 [2024-06-10 10:53:47.744810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-06-10 
10:53:47.744826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.513 [2024-06-10 10:53:47.755073] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.513 [2024-06-10 10:53:47.756428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:10125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-06-10 10:53:47.756445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.513 [2024-06-10 10:53:47.766712] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.513 [2024-06-10 10:53:47.768067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-06-10 10:53:47.768083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.513 [2024-06-10 10:53:47.778351] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.513 [2024-06-10 10:53:47.779723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-06-10 10:53:47.779739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.513 [2024-06-10 10:53:47.789986] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.513 [2024-06-10 10:53:47.791359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-06-10 10:53:47.791375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.774 [2024-06-10 10:53:47.801639] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.774 [2024-06-10 10:53:47.802987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.774 [2024-06-10 10:53:47.803004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.774 [2024-06-10 10:53:47.813263] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.774 [2024-06-10 10:53:47.814619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.774 [2024-06-10 10:53:47.814635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.774 [2024-06-10 10:53:47.824896] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.774 [2024-06-10 10:53:47.826256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:8009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:23.774 [2024-06-10 10:53:47.826272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.774 [2024-06-10 10:53:47.836533] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.774 [2024-06-10 10:53:47.837894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.774 [2024-06-10 10:53:47.837910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.774 [2024-06-10 10:53:47.848176] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.774 [2024-06-10 10:53:47.849536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.774 [2024-06-10 10:53:47.849552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.774 [2024-06-10 10:53:47.859810] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.774 [2024-06-10 10:53:47.861170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.774 [2024-06-10 10:53:47.861186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.774 [2024-06-10 10:53:47.871488] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.774 [2024-06-10 10:53:47.872841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:6170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.774 [2024-06-10 10:53:47.872858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.774 [2024-06-10 10:53:47.883286] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.774 [2024-06-10 10:53:47.884645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.774 [2024-06-10 10:53:47.884661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.774 [2024-06-10 10:53:47.895035] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.774 [2024-06-10 10:53:47.896360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.774 [2024-06-10 10:53:47.896382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.774 [2024-06-10 10:53:47.906667] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.774 [2024-06-10 10:53:47.908023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12559 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:23.774 [2024-06-10 10:53:47.908038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.774 [2024-06-10 10:53:47.918305] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.774 [2024-06-10 10:53:47.919641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.774 [2024-06-10 10:53:47.919657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.774 [2024-06-10 10:53:47.929927] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.774 [2024-06-10 10:53:47.931282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.774 [2024-06-10 10:53:47.931299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.774 [2024-06-10 10:53:47.941580] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.774 [2024-06-10 10:53:47.942932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:3068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.774 [2024-06-10 10:53:47.942948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.774 [2024-06-10 10:53:47.953254] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.774 [2024-06-10 10:53:47.954602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.774 [2024-06-10 10:53:47.954618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.774 [2024-06-10 10:53:47.964903] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.774 [2024-06-10 10:53:47.966263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.774 [2024-06-10 10:53:47.966279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.774 [2024-06-10 10:53:47.976536] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.774 [2024-06-10 10:53:47.977894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.774 [2024-06-10 10:53:47.977910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.774 [2024-06-10 10:53:47.988188] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.774 [2024-06-10 10:53:47.989509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12796 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.774 [2024-06-10 10:53:47.989525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.774 [2024-06-10 10:53:47.999825] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.774 [2024-06-10 10:53:48.001184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.774 [2024-06-10 10:53:48.001200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.775 [2024-06-10 10:53:48.011469] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.775 [2024-06-10 10:53:48.012823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.775 [2024-06-10 10:53:48.012838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.775 [2024-06-10 10:53:48.023088] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.775 [2024-06-10 10:53:48.024448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.775 [2024-06-10 10:53:48.024464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.775 [2024-06-10 10:53:48.034723] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.775 [2024-06-10 10:53:48.036075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.775 [2024-06-10 10:53:48.036091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.775 [2024-06-10 10:53:48.046345] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.775 [2024-06-10 10:53:48.047703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.775 [2024-06-10 10:53:48.047719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:23.775 [2024-06-10 10:53:48.057975] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:23.775 [2024-06-10 10:53:48.059324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.775 [2024-06-10 10:53:48.059340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.036 [2024-06-10 10:53:48.069616] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.036 [2024-06-10 10:53:48.070975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:55 nsid:1 lba:3636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.036 [2024-06-10 10:53:48.070992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.036 [2024-06-10 10:53:48.081275] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.036 [2024-06-10 10:53:48.082629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.036 [2024-06-10 10:53:48.082645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.036 [2024-06-10 10:53:48.092900] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.036 [2024-06-10 10:53:48.094256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.036 [2024-06-10 10:53:48.094271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.036 [2024-06-10 10:53:48.104533] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.036 [2024-06-10 10:53:48.105850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:6290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.036 [2024-06-10 10:53:48.105866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.036 [2024-06-10 10:53:48.116166] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.036 [2024-06-10 10:53:48.117520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:24905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.036 [2024-06-10 10:53:48.117536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.036 [2024-06-10 10:53:48.127818] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.036 [2024-06-10 10:53:48.129177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.036 [2024-06-10 10:53:48.129193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.036 [2024-06-10 10:53:48.139436] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.036 [2024-06-10 10:53:48.140791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.036 [2024-06-10 10:53:48.140807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.036 [2024-06-10 10:53:48.151058] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.036 [2024-06-10 10:53:48.152421] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:18737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.036 [2024-06-10 10:53:48.152438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.036 [2024-06-10 10:53:48.162709] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.036 [2024-06-10 10:53:48.164067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.036 [2024-06-10 10:53:48.164083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.036 [2024-06-10 10:53:48.174341] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.036 [2024-06-10 10:53:48.175699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:24184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.036 [2024-06-10 10:53:48.175715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.036 [2024-06-10 10:53:48.185970] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.036 [2024-06-10 10:53:48.187328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.036 [2024-06-10 10:53:48.187344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.036 [2024-06-10 10:53:48.197604] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.036 [2024-06-10 10:53:48.198958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.036 [2024-06-10 10:53:48.198976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.036 [2024-06-10 10:53:48.209208] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.036 [2024-06-10 10:53:48.210554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.036 [2024-06-10 10:53:48.210570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.036 [2024-06-10 10:53:48.220840] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.036 [2024-06-10 10:53:48.222197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.036 [2024-06-10 10:53:48.222213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.036 [2024-06-10 10:53:48.232465] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.036 [2024-06-10 10:53:48.233815] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:6942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.036 [2024-06-10 10:53:48.233831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.036 [2024-06-10 10:53:48.244109] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.036 [2024-06-10 10:53:48.245468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.036 [2024-06-10 10:53:48.245484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.036 [2024-06-10 10:53:48.255732] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.036 [2024-06-10 10:53:48.257107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.036 [2024-06-10 10:53:48.257123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.036 [2024-06-10 10:53:48.267386] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.036 [2024-06-10 10:53:48.268756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.036 [2024-06-10 10:53:48.268771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.036 [2024-06-10 10:53:48.279029] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.036 [2024-06-10 10:53:48.280408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.036 [2024-06-10 10:53:48.280424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.036 [2024-06-10 10:53:48.290709] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.036 [2024-06-10 10:53:48.292060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.036 [2024-06-10 10:53:48.292076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.036 [2024-06-10 10:53:48.302357] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.036 [2024-06-10 10:53:48.303735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.037 [2024-06-10 10:53:48.303751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.037 [2024-06-10 10:53:48.314007] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.037 [2024-06-10 
10:53:48.315364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.037 [2024-06-10 10:53:48.315380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.298 [2024-06-10 10:53:48.325652] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.298 [2024-06-10 10:53:48.327012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.298 [2024-06-10 10:53:48.327029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.298 [2024-06-10 10:53:48.337300] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.298 [2024-06-10 10:53:48.338670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:12857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.298 [2024-06-10 10:53:48.338686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.298 [2024-06-10 10:53:48.348937] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.298 [2024-06-10 10:53:48.350286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.298 [2024-06-10 10:53:48.350302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.298 [2024-06-10 10:53:48.360600] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.298 [2024-06-10 10:53:48.361922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.298 [2024-06-10 10:53:48.361938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.298 [2024-06-10 10:53:48.372239] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.298 [2024-06-10 10:53:48.373620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.298 [2024-06-10 10:53:48.373636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.298 [2024-06-10 10:53:48.383887] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.298 [2024-06-10 10:53:48.385238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.298 [2024-06-10 10:53:48.385257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.298 [2024-06-10 10:53:48.395511] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 
00:28:24.298 [2024-06-10 10:53:48.396828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.298 [2024-06-10 10:53:48.396844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.298 [2024-06-10 10:53:48.407151] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.298 [2024-06-10 10:53:48.408515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.298 [2024-06-10 10:53:48.408531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.298 [2024-06-10 10:53:48.418782] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.298 [2024-06-10 10:53:48.420134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.298 [2024-06-10 10:53:48.420150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.298 [2024-06-10 10:53:48.430424] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.298 [2024-06-10 10:53:48.431778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.298 [2024-06-10 10:53:48.431794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.298 [2024-06-10 10:53:48.442084] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.298 [2024-06-10 10:53:48.443445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.298 [2024-06-10 10:53:48.443461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.298 [2024-06-10 10:53:48.453745] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.298 [2024-06-10 10:53:48.455115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:10363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.298 [2024-06-10 10:53:48.455132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.298 [2024-06-10 10:53:48.465470] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.298 [2024-06-10 10:53:48.466830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.298 [2024-06-10 10:53:48.466846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.298 [2024-06-10 10:53:48.477219] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with 
pdu=0x2000190ecc78 00:28:24.298 [2024-06-10 10:53:48.478556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.298 [2024-06-10 10:53:48.478572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.298 [2024-06-10 10:53:48.488898] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.298 [2024-06-10 10:53:48.490271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.298 [2024-06-10 10:53:48.490287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.298 [2024-06-10 10:53:48.500570] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.298 [2024-06-10 10:53:48.501929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:20650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.298 [2024-06-10 10:53:48.501949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.298 [2024-06-10 10:53:48.512209] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.298 [2024-06-10 10:53:48.513583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.298 [2024-06-10 10:53:48.513600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.298 [2024-06-10 10:53:48.523870] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.298 [2024-06-10 10:53:48.525227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.299 [2024-06-10 10:53:48.525247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.299 [2024-06-10 10:53:48.535522] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.299 [2024-06-10 10:53:48.536887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.299 [2024-06-10 10:53:48.536904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.299 [2024-06-10 10:53:48.547185] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.299 [2024-06-10 10:53:48.548550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.299 [2024-06-10 10:53:48.548567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.299 [2024-06-10 10:53:48.558940] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.299 [2024-06-10 10:53:48.560314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.299 [2024-06-10 10:53:48.560330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.299 [2024-06-10 10:53:48.570652] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.299 [2024-06-10 10:53:48.572010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.299 [2024-06-10 10:53:48.572026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.299 [2024-06-10 10:53:48.582305] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.299 [2024-06-10 10:53:48.583666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.299 [2024-06-10 10:53:48.583682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.560 [2024-06-10 10:53:48.593995] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.560 [2024-06-10 10:53:48.595355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.560 [2024-06-10 10:53:48.595371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.560 [2024-06-10 10:53:48.605656] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.560 [2024-06-10 10:53:48.607014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.560 [2024-06-10 10:53:48.607030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.560 [2024-06-10 10:53:48.617320] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.560 [2024-06-10 10:53:48.618680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.560 [2024-06-10 10:53:48.618696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.560 [2024-06-10 10:53:48.628984] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.560 [2024-06-10 10:53:48.630317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.560 [2024-06-10 10:53:48.630334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.560 [2024-06-10 10:53:48.640642] tcp.c:2062:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.560 [2024-06-10 10:53:48.642001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:1673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.560 [2024-06-10 10:53:48.642017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.560 [2024-06-10 10:53:48.652291] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.560 [2024-06-10 10:53:48.653612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.560 [2024-06-10 10:53:48.653630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.560 [2024-06-10 10:53:48.663956] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.560 [2024-06-10 10:53:48.665313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.560 [2024-06-10 10:53:48.665329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.560 [2024-06-10 10:53:48.675608] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.560 [2024-06-10 10:53:48.676963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.560 [2024-06-10 10:53:48.676979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.560 [2024-06-10 10:53:48.687281] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.560 [2024-06-10 10:53:48.688610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.560 [2024-06-10 10:53:48.688626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.560 [2024-06-10 10:53:48.698953] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.561 [2024-06-10 10:53:48.700307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.561 [2024-06-10 10:53:48.700324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.561 [2024-06-10 10:53:48.710622] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.561 [2024-06-10 10:53:48.711977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.561 [2024-06-10 10:53:48.711994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.561 [2024-06-10 10:53:48.722317] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.561 [2024-06-10 10:53:48.723675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.561 [2024-06-10 10:53:48.723692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.561 [2024-06-10 10:53:48.733984] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.561 [2024-06-10 10:53:48.735351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:18907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.561 [2024-06-10 10:53:48.735367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.561 [2024-06-10 10:53:48.745650] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.561 [2024-06-10 10:53:48.747004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.561 [2024-06-10 10:53:48.747020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.561 [2024-06-10 10:53:48.757304] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.561 [2024-06-10 10:53:48.758624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.561 [2024-06-10 10:53:48.758641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.561 [2024-06-10 10:53:48.768958] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.561 [2024-06-10 10:53:48.770282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.561 [2024-06-10 10:53:48.770299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.561 [2024-06-10 10:53:48.780626] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.561 [2024-06-10 10:53:48.781986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.561 [2024-06-10 10:53:48.782003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.561 [2024-06-10 10:53:48.792311] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.561 [2024-06-10 10:53:48.793683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.561 [2024-06-10 10:53:48.793700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.561 
[2024-06-10 10:53:48.803977] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.561 [2024-06-10 10:53:48.805348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.561 [2024-06-10 10:53:48.805367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.561 [2024-06-10 10:53:48.815626] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.561 [2024-06-10 10:53:48.816978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.561 [2024-06-10 10:53:48.816995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.561 [2024-06-10 10:53:48.827303] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.561 [2024-06-10 10:53:48.828657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.561 [2024-06-10 10:53:48.828674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.561 [2024-06-10 10:53:48.838955] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.561 [2024-06-10 10:53:48.840316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.561 [2024-06-10 10:53:48.840333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.822 [2024-06-10 10:53:48.850626] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.822 [2024-06-10 10:53:48.851989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.822 [2024-06-10 10:53:48.852005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.822 [2024-06-10 10:53:48.862296] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.822 [2024-06-10 10:53:48.863649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.822 [2024-06-10 10:53:48.863666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.822 [2024-06-10 10:53:48.873961] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.822 [2024-06-10 10:53:48.875319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.822 [2024-06-10 10:53:48.875335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0070 p:0 
m:0 dnr:0 00:28:24.822 [2024-06-10 10:53:48.885790] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.822 [2024-06-10 10:53:48.887146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.822 [2024-06-10 10:53:48.887163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.822 [2024-06-10 10:53:48.897440] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.822 [2024-06-10 10:53:48.898811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.822 [2024-06-10 10:53:48.898828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.822 [2024-06-10 10:53:48.909098] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.822 [2024-06-10 10:53:48.910469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.823 [2024-06-10 10:53:48.910486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.823 [2024-06-10 10:53:48.920769] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.823 [2024-06-10 10:53:48.922130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:18402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.823 [2024-06-10 10:53:48.922146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.823 [2024-06-10 10:53:48.932423] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.823 [2024-06-10 10:53:48.933780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.823 [2024-06-10 10:53:48.933797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.823 [2024-06-10 10:53:48.944086] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.823 [2024-06-10 10:53:48.945444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.823 [2024-06-10 10:53:48.945461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.823 [2024-06-10 10:53:48.955744] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.823 [2024-06-10 10:53:48.957074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.823 [2024-06-10 10:53:48.957091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 
cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.823 [2024-06-10 10:53:48.967417] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.823 [2024-06-10 10:53:48.968788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:14584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.823 [2024-06-10 10:53:48.968805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.823 [2024-06-10 10:53:48.979069] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.823 [2024-06-10 10:53:48.980449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.823 [2024-06-10 10:53:48.980466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.823 [2024-06-10 10:53:48.990751] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.823 [2024-06-10 10:53:48.992111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:8379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.823 [2024-06-10 10:53:48.992127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.823 [2024-06-10 10:53:49.002438] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.823 [2024-06-10 10:53:49.003760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.823 [2024-06-10 10:53:49.003776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.823 [2024-06-10 10:53:49.014115] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.823 [2024-06-10 10:53:49.015460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:24143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.823 [2024-06-10 10:53:49.015476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.823 [2024-06-10 10:53:49.025766] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.823 [2024-06-10 10:53:49.027121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.823 [2024-06-10 10:53:49.027138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.823 [2024-06-10 10:53:49.037433] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.823 [2024-06-10 10:53:49.038792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.823 [2024-06-10 10:53:49.038808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:38 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.823 [2024-06-10 10:53:49.049093] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.823 [2024-06-10 10:53:49.050351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.823 [2024-06-10 10:53:49.050367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.823 [2024-06-10 10:53:49.060754] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.823 [2024-06-10 10:53:49.062109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.823 [2024-06-10 10:53:49.062125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.823 [2024-06-10 10:53:49.072414] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.823 [2024-06-10 10:53:49.073772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.823 [2024-06-10 10:53:49.073789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.823 [2024-06-10 10:53:49.084073] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.823 [2024-06-10 10:53:49.085436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.823 [2024-06-10 10:53:49.085451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.823 [2024-06-10 10:53:49.095722] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.823 [2024-06-10 10:53:49.097072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.823 [2024-06-10 10:53:49.097090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:24.823 [2024-06-10 10:53:49.107386] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:24.823 [2024-06-10 10:53:49.108745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.823 [2024-06-10 10:53:49.108764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:25.084 [2024-06-10 10:53:49.119060] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:25.084 [2024-06-10 10:53:49.120425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.084 [2024-06-10 10:53:49.120441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:25.084 [2024-06-10 10:53:49.130739] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:25.084 [2024-06-10 10:53:49.132097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.084 [2024-06-10 10:53:49.132113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:25.084 [2024-06-10 10:53:49.142386] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:25.084 [2024-06-10 10:53:49.143737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.084 [2024-06-10 10:53:49.143753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:25.084 [2024-06-10 10:53:49.154044] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:25.084 [2024-06-10 10:53:49.155362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:17577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.084 [2024-06-10 10:53:49.155378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:25.084 [2024-06-10 10:53:49.165698] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:25.084 [2024-06-10 10:53:49.167061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:17861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.084 [2024-06-10 10:53:49.167077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:25.084 [2024-06-10 10:53:49.177363] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:25.084 [2024-06-10 10:53:49.178716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.084 [2024-06-10 10:53:49.178733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:25.085 [2024-06-10 10:53:49.189029] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:25.085 [2024-06-10 10:53:49.190363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.085 [2024-06-10 10:53:49.190379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:25.085 [2024-06-10 10:53:49.200710] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:25.085 [2024-06-10 10:53:49.202069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.085 [2024-06-10 10:53:49.202085] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:25.085 [2024-06-10 10:53:49.212364] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:25.085 [2024-06-10 10:53:49.213725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.085 [2024-06-10 10:53:49.213740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:25.085 [2024-06-10 10:53:49.224017] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:25.085 [2024-06-10 10:53:49.225357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:10446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.085 [2024-06-10 10:53:49.225373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:25.085 [2024-06-10 10:53:49.235667] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:25.085 [2024-06-10 10:53:49.237023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.085 [2024-06-10 10:53:49.237039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:25.085 [2024-06-10 10:53:49.247318] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:25.085 [2024-06-10 10:53:49.248637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:2931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.085 [2024-06-10 10:53:49.248654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:25.085 [2024-06-10 10:53:49.258981] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:25.085 [2024-06-10 10:53:49.260346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:15681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.085 [2024-06-10 10:53:49.260362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:25.085 [2024-06-10 10:53:49.270636] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:25.085 [2024-06-10 10:53:49.271991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.085 [2024-06-10 10:53:49.272007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:25.085 [2024-06-10 10:53:49.282258] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:25.085 [2024-06-10 10:53:49.283615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:25073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.085 [2024-06-10 
10:53:49.283631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:25.085 [2024-06-10 10:53:49.293879] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:25.085 [2024-06-10 10:53:49.295235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.085 [2024-06-10 10:53:49.295253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:25.085 [2024-06-10 10:53:49.305506] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:25.085 [2024-06-10 10:53:49.306864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.085 [2024-06-10 10:53:49.306880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:25.085 [2024-06-10 10:53:49.317152] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:25.085 [2024-06-10 10:53:49.318470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:22527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.085 [2024-06-10 10:53:49.318486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:25.085 [2024-06-10 10:53:49.328795] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:25.085 [2024-06-10 10:53:49.330155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:17115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.085 [2024-06-10 10:53:49.330172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:25.085 [2024-06-10 10:53:49.340428] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf37f0) with pdu=0x2000190ecc78 00:28:25.085 [2024-06-10 10:53:49.341775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.085 [2024-06-10 10:53:49.341791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:25.085 00:28:25.085 Latency(us) 00:28:25.085 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:25.085 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:25.085 nvme0n1 : 2.01 21888.46 85.50 0.00 0.00 5839.36 2239.15 12014.93 00:28:25.085 =================================================================================================================== 00:28:25.085 Total : 21888.46 85.50 0.00 0.00 5839.36 2239.15 12014.93 00:28:25.085 0 00:28:25.085 10:53:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:25.085 10:53:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:25.085 10:53:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r 
'.bdevs[0] 00:28:25.085 | .driver_specific 00:28:25.085 | .nvme_error 00:28:25.085 | .status_code 00:28:25.085 | .command_transient_transport_error' 00:28:25.085 10:53:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:25.345 10:53:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 172 > 0 )) 00:28:25.345 10:53:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1010130 00:28:25.345 10:53:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 1010130 ']' 00:28:25.345 10:53:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 1010130 00:28:25.345 10:53:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:28:25.345 10:53:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:25.345 10:53:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1010130 00:28:25.345 10:53:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:25.345 10:53:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:25.345 10:53:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1010130' 00:28:25.345 killing process with pid 1010130 00:28:25.345 10:53:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 1010130 00:28:25.345 Received shutdown signal, test time was about 2.000000 seconds 00:28:25.345 00:28:25.345 Latency(us) 00:28:25.345 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:25.345 =================================================================================================================== 00:28:25.346 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:25.346 10:53:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 1010130 00:28:25.606 10:53:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:28:25.606 10:53:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:25.606 10:53:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:25.606 10:53:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:25.606 10:53:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:25.606 10:53:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1010809 00:28:25.606 10:53:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1010809 /var/tmp/bperf.sock 00:28:25.606 10:53:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 1010809 ']' 00:28:25.606 10:53:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:28:25.606 10:53:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:25.606 10:53:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:25.606 
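Before the next run is launched, get_transient_errcount verifies that the corrupted digests were actually accounted for: it reads the per-controller error counters out of bdevperf (collected because bdev_nvme_set_options is called with --nvme-error-stat, as the trace below shows) and requires a non-zero command_transient_transport_error count, which is the (( 172 > 0 )) check above. A rough stand-alone equivalent of that check, reusing the RPC socket and the jq filter exactly as they appear in the trace (illustrative sketch, not the host/digest.sh helper itself):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # dump I/O statistics for the bdev created by bdevperf and pull out the
  # transient transport error counter recorded by the NVMe driver
  errcount=$($RPC -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

  # the digest-error test only passes if at least one such error was counted
  (( errcount > 0 ))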
10:53:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:25.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:25.606 10:53:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:25.606 10:53:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:25.606 [2024-06-10 10:53:49.742586] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:28:25.606 [2024-06-10 10:53:49.742644] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1010809 ] 00:28:25.606 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:25.606 Zero copy mechanism will not be used. 00:28:25.606 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.606 [2024-06-10 10:53:49.817350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.606 [2024-06-10 10:53:49.870534] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:28:26.546 10:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:26.546 10:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:28:26.546 10:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:26.546 10:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:26.546 10:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:26.546 10:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:26.546 10:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:26.546 10:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:26.546 10:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:26.546 10:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:26.806 nvme0n1 00:28:26.806 10:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:26.806 10:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:26.806 10:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:26.806 10:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:26.806 10:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:26.806 10:53:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:26.806 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:26.806 Zero copy mechanism will not be used. 00:28:26.806 Running I/O for 2 seconds... 00:28:26.806 [2024-06-10 10:53:51.033897] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:26.806 [2024-06-10 10:53:51.034326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.806 [2024-06-10 10:53:51.034353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.806 [2024-06-10 10:53:51.046747] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:26.806 [2024-06-10 10:53:51.047124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.806 [2024-06-10 10:53:51.047146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:26.806 [2024-06-10 10:53:51.057898] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:26.806 [2024-06-10 10:53:51.058253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.806 [2024-06-10 10:53:51.058272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:26.806 [2024-06-10 10:53:51.069169] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:26.806 [2024-06-10 10:53:51.069517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.806 [2024-06-10 10:53:51.069535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:26.806 [2024-06-10 10:53:51.078537] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:26.806 [2024-06-10 10:53:51.078892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.806 [2024-06-10 10:53:51.078910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:26.806 [2024-06-10 10:53:51.089490] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:26.806 [2024-06-10 10:53:51.089797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.807 [2024-06-10 10:53:51.089815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.067 [2024-06-10 10:53:51.099863] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 
00:28:27.067 [2024-06-10 10:53:51.100216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.067 [2024-06-10 10:53:51.100238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.067 [2024-06-10 10:53:51.110504] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.067 [2024-06-10 10:53:51.110807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.067 [2024-06-10 10:53:51.110825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.067 [2024-06-10 10:53:51.122370] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.067 [2024-06-10 10:53:51.122688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.067 [2024-06-10 10:53:51.122705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.067 [2024-06-10 10:53:51.133135] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.067 [2024-06-10 10:53:51.133457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.067 [2024-06-10 10:53:51.133474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.067 [2024-06-10 10:53:51.143735] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.067 [2024-06-10 10:53:51.144092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.067 [2024-06-10 10:53:51.144110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.067 [2024-06-10 10:53:51.155712] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.067 [2024-06-10 10:53:51.156022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-06-10 10:53:51.156040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.068 [2024-06-10 10:53:51.165746] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.068 [2024-06-10 10:53:51.166090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-06-10 10:53:51.166107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.068 [2024-06-10 10:53:51.174098] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.068 [2024-06-10 10:53:51.174332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-06-10 10:53:51.174349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.068 [2024-06-10 10:53:51.182161] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.068 [2024-06-10 10:53:51.182388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-06-10 10:53:51.182405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.068 [2024-06-10 10:53:51.189930] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.068 [2024-06-10 10:53:51.190259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-06-10 10:53:51.190276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.068 [2024-06-10 10:53:51.199264] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.068 [2024-06-10 10:53:51.199606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-06-10 10:53:51.199624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.068 [2024-06-10 10:53:51.205395] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.068 [2024-06-10 10:53:51.205703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-06-10 10:53:51.205720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.068 [2024-06-10 10:53:51.211073] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.068 [2024-06-10 10:53:51.211301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-06-10 10:53:51.211318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.068 [2024-06-10 10:53:51.218300] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.068 [2024-06-10 10:53:51.218647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-06-10 10:53:51.218664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.068 [2024-06-10 10:53:51.227507] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.068 [2024-06-10 10:53:51.227721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-06-10 10:53:51.227737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.068 [2024-06-10 10:53:51.232193] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.068 [2024-06-10 10:53:51.232416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-06-10 10:53:51.232433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.068 [2024-06-10 10:53:51.238206] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.068 [2024-06-10 10:53:51.238436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-06-10 10:53:51.238452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.068 [2024-06-10 10:53:51.242629] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.068 [2024-06-10 10:53:51.242840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-06-10 10:53:51.242856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.068 [2024-06-10 10:53:51.247438] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.068 [2024-06-10 10:53:51.247660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-06-10 10:53:51.247677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.068 [2024-06-10 10:53:51.253512] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.068 [2024-06-10 10:53:51.253849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-06-10 10:53:51.253866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.068 [2024-06-10 10:53:51.264394] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.068 [2024-06-10 10:53:51.264728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-06-10 10:53:51.264745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:28:27.068 [2024-06-10 10:53:51.274303] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.068 [2024-06-10 10:53:51.274629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-06-10 10:53:51.274646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.068 [2024-06-10 10:53:51.281516] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.068 [2024-06-10 10:53:51.281742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-06-10 10:53:51.281759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.068 [2024-06-10 10:53:51.291494] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.068 [2024-06-10 10:53:51.291711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-06-10 10:53:51.291728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.068 [2024-06-10 10:53:51.300855] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.068 [2024-06-10 10:53:51.301179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-06-10 10:53:51.301196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.068 [2024-06-10 10:53:51.307504] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.068 [2024-06-10 10:53:51.307868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-06-10 10:53:51.307885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.068 [2024-06-10 10:53:51.316092] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.068 [2024-06-10 10:53:51.316402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-06-10 10:53:51.316423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.068 [2024-06-10 10:53:51.323045] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.068 [2024-06-10 10:53:51.323131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-06-10 10:53:51.323146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.068 [2024-06-10 10:53:51.329414] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.068 [2024-06-10 10:53:51.329618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-06-10 10:53:51.329634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.068 [2024-06-10 10:53:51.334794] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.068 [2024-06-10 10:53:51.334996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-06-10 10:53:51.335012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.068 [2024-06-10 10:53:51.339490] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.068 [2024-06-10 10:53:51.339689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-06-10 10:53:51.339705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.068 [2024-06-10 10:53:51.344436] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.068 [2024-06-10 10:53:51.344755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-06-10 10:53:51.344772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.068 [2024-06-10 10:53:51.348743] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.069 [2024-06-10 10:53:51.348941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.069 [2024-06-10 10:53:51.348958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.069 [2024-06-10 10:53:51.353541] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.069 [2024-06-10 10:53:51.353740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.069 [2024-06-10 10:53:51.353756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.330 [2024-06-10 10:53:51.357703] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.330 [2024-06-10 10:53:51.357900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.330 [2024-06-10 10:53:51.357917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.330 [2024-06-10 10:53:51.361666] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.330 [2024-06-10 10:53:51.361865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.330 [2024-06-10 10:53:51.361882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.330 [2024-06-10 10:53:51.366007] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.330 [2024-06-10 10:53:51.366314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.330 [2024-06-10 10:53:51.366334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.330 [2024-06-10 10:53:51.370657] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.330 [2024-06-10 10:53:51.370853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.330 [2024-06-10 10:53:51.370869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.330 [2024-06-10 10:53:51.376041] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.330 [2024-06-10 10:53:51.376407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.330 [2024-06-10 10:53:51.376424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.330 [2024-06-10 10:53:51.381891] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.330 [2024-06-10 10:53:51.382088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.330 [2024-06-10 10:53:51.382105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.330 [2024-06-10 10:53:51.386298] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.330 [2024-06-10 10:53:51.386496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.330 [2024-06-10 10:53:51.386513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.330 [2024-06-10 10:53:51.390631] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.330 [2024-06-10 10:53:51.390832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.330 [2024-06-10 10:53:51.390848] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.330 [2024-06-10 10:53:51.395366] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.330 [2024-06-10 10:53:51.395739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.330 [2024-06-10 10:53:51.395757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.330 [2024-06-10 10:53:51.404266] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.330 [2024-06-10 10:53:51.404525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.330 [2024-06-10 10:53:51.404545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.330 [2024-06-10 10:53:51.410832] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.330 [2024-06-10 10:53:51.411033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.330 [2024-06-10 10:53:51.411049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.330 [2024-06-10 10:53:51.416530] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.330 [2024-06-10 10:53:51.416729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.330 [2024-06-10 10:53:51.416746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.330 [2024-06-10 10:53:51.422987] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.330 [2024-06-10 10:53:51.423345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-06-10 10:53:51.423362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.331 [2024-06-10 10:53:51.430126] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.331 [2024-06-10 10:53:51.430553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-06-10 10:53:51.430571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.331 [2024-06-10 10:53:51.435926] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.331 [2024-06-10 10:53:51.436153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 
[2024-06-10 10:53:51.436169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.331 [2024-06-10 10:53:51.444224] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.331 [2024-06-10 10:53:51.444431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-06-10 10:53:51.444447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.331 [2024-06-10 10:53:51.450213] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.331 [2024-06-10 10:53:51.450417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-06-10 10:53:51.450434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.331 [2024-06-10 10:53:51.456143] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.331 [2024-06-10 10:53:51.456346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-06-10 10:53:51.456363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.331 [2024-06-10 10:53:51.462299] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.331 [2024-06-10 10:53:51.462504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-06-10 10:53:51.462521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.331 [2024-06-10 10:53:51.469467] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.331 [2024-06-10 10:53:51.469742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-06-10 10:53:51.469759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.331 [2024-06-10 10:53:51.477227] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.331 [2024-06-10 10:53:51.477465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-06-10 10:53:51.477481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.331 [2024-06-10 10:53:51.484010] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.331 [2024-06-10 10:53:51.484208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-06-10 10:53:51.484225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.331 [2024-06-10 10:53:51.488810] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.331 [2024-06-10 10:53:51.489010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-06-10 10:53:51.489026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.331 [2024-06-10 10:53:51.494084] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.331 [2024-06-10 10:53:51.494286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-06-10 10:53:51.494302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.331 [2024-06-10 10:53:51.498958] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.331 [2024-06-10 10:53:51.499158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-06-10 10:53:51.499175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.331 [2024-06-10 10:53:51.503796] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.331 [2024-06-10 10:53:51.504003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-06-10 10:53:51.504020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.331 [2024-06-10 10:53:51.508662] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.331 [2024-06-10 10:53:51.508859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-06-10 10:53:51.508875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.331 [2024-06-10 10:53:51.515111] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.331 [2024-06-10 10:53:51.515313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-06-10 10:53:51.515329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.331 [2024-06-10 10:53:51.519767] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.331 [2024-06-10 10:53:51.519964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-06-10 10:53:51.519980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.331 [2024-06-10 10:53:51.525723] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.331 [2024-06-10 10:53:51.525923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-06-10 10:53:51.525939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.331 [2024-06-10 10:53:51.532094] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.331 [2024-06-10 10:53:51.532299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-06-10 10:53:51.532315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.331 [2024-06-10 10:53:51.538117] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.331 [2024-06-10 10:53:51.538321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-06-10 10:53:51.538337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.331 [2024-06-10 10:53:51.543740] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.331 [2024-06-10 10:53:51.543979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-06-10 10:53:51.543995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.331 [2024-06-10 10:53:51.552862] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.331 [2024-06-10 10:53:51.553061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-06-10 10:53:51.553078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.331 [2024-06-10 10:53:51.559416] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.331 [2024-06-10 10:53:51.559795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-06-10 10:53:51.559812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.331 [2024-06-10 10:53:51.566976] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.331 [2024-06-10 10:53:51.567339] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-06-10 10:53:51.567359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.331 [2024-06-10 10:53:51.577172] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.331 [2024-06-10 10:53:51.577464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-06-10 10:53:51.577481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.331 [2024-06-10 10:53:51.585735] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.331 [2024-06-10 10:53:51.585959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-06-10 10:53:51.585975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.331 [2024-06-10 10:53:51.594758] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.331 [2024-06-10 10:53:51.594902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-06-10 10:53:51.594918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.331 [2024-06-10 10:53:51.605040] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.332 [2024-06-10 10:53:51.605402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.332 [2024-06-10 10:53:51.605420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.332 [2024-06-10 10:53:51.613164] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.332 [2024-06-10 10:53:51.613444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.332 [2024-06-10 10:53:51.613461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.594 [2024-06-10 10:53:51.624910] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.594 [2024-06-10 10:53:51.625296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.594 [2024-06-10 10:53:51.625314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.594 [2024-06-10 10:53:51.632386] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.594 
[2024-06-10 10:53:51.632587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.594 [2024-06-10 10:53:51.632603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.594 [2024-06-10 10:53:51.638796] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.594 [2024-06-10 10:53:51.639002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.594 [2024-06-10 10:53:51.639018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.594 [2024-06-10 10:53:51.645157] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.594 [2024-06-10 10:53:51.645397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.594 [2024-06-10 10:53:51.645414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.594 [2024-06-10 10:53:51.652164] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.594 [2024-06-10 10:53:51.652365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.594 [2024-06-10 10:53:51.652382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.594 [2024-06-10 10:53:51.660757] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.594 [2024-06-10 10:53:51.660995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.594 [2024-06-10 10:53:51.661012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.594 [2024-06-10 10:53:51.667517] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.594 [2024-06-10 10:53:51.667739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.594 [2024-06-10 10:53:51.667755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.594 [2024-06-10 10:53:51.674684] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.594 [2024-06-10 10:53:51.674885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.594 [2024-06-10 10:53:51.674902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.594 [2024-06-10 10:53:51.681336] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.594 [2024-06-10 10:53:51.681538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.594 [2024-06-10 10:53:51.681555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.594 [2024-06-10 10:53:51.687657] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.594 [2024-06-10 10:53:51.688013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.594 [2024-06-10 10:53:51.688030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.594 [2024-06-10 10:53:51.693687] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.594 [2024-06-10 10:53:51.693886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.594 [2024-06-10 10:53:51.693902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.594 [2024-06-10 10:53:51.702195] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.594 [2024-06-10 10:53:51.702569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.594 [2024-06-10 10:53:51.702586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.594 [2024-06-10 10:53:51.707701] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.594 [2024-06-10 10:53:51.707899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.594 [2024-06-10 10:53:51.707916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.594 [2024-06-10 10:53:51.713927] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.594 [2024-06-10 10:53:51.714125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.594 [2024-06-10 10:53:51.714142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.594 [2024-06-10 10:53:51.719699] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.594 [2024-06-10 10:53:51.719898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.594 [2024-06-10 10:53:51.719914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.594 [2024-06-10 10:53:51.724305] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.594 [2024-06-10 10:53:51.724645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.594 [2024-06-10 10:53:51.724662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.594 [2024-06-10 10:53:51.729591] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.594 [2024-06-10 10:53:51.729976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.594 [2024-06-10 10:53:51.729993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.594 [2024-06-10 10:53:51.738164] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.594 [2024-06-10 10:53:51.738551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.594 [2024-06-10 10:53:51.738568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.594 [2024-06-10 10:53:51.745116] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.594 [2024-06-10 10:53:51.745397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.594 [2024-06-10 10:53:51.745415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.594 [2024-06-10 10:53:51.752464] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.594 [2024-06-10 10:53:51.752687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.594 [2024-06-10 10:53:51.752703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.594 [2024-06-10 10:53:51.757278] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.594 [2024-06-10 10:53:51.757477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.594 [2024-06-10 10:53:51.757496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.594 [2024-06-10 10:53:51.762189] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.595 [2024-06-10 10:53:51.762390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.595 [2024-06-10 10:53:51.762406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:28:27.595 [2024-06-10 10:53:51.766546] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.595 [2024-06-10 10:53:51.766745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.595 [2024-06-10 10:53:51.766762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.595 [2024-06-10 10:53:51.770925] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.595 [2024-06-10 10:53:51.771124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.595 [2024-06-10 10:53:51.771140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.595 [2024-06-10 10:53:51.775467] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.595 [2024-06-10 10:53:51.775667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.595 [2024-06-10 10:53:51.775684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.595 [2024-06-10 10:53:51.780023] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.595 [2024-06-10 10:53:51.780222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.595 [2024-06-10 10:53:51.780239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.595 [2024-06-10 10:53:51.784313] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.595 [2024-06-10 10:53:51.784512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.595 [2024-06-10 10:53:51.784528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.595 [2024-06-10 10:53:51.788414] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.595 [2024-06-10 10:53:51.788612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.595 [2024-06-10 10:53:51.788628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.595 [2024-06-10 10:53:51.792481] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.595 [2024-06-10 10:53:51.792680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.595 [2024-06-10 10:53:51.792696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.595 [2024-06-10 10:53:51.798291] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.595 [2024-06-10 10:53:51.798694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.595 [2024-06-10 10:53:51.798711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.595 [2024-06-10 10:53:51.805481] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.595 [2024-06-10 10:53:51.805720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.595 [2024-06-10 10:53:51.805737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.595 [2024-06-10 10:53:51.813123] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.595 [2024-06-10 10:53:51.813508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.595 [2024-06-10 10:53:51.813525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.595 [2024-06-10 10:53:51.819727] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.595 [2024-06-10 10:53:51.820041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.595 [2024-06-10 10:53:51.820058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.595 [2024-06-10 10:53:51.827030] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.595 [2024-06-10 10:53:51.827363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.595 [2024-06-10 10:53:51.827380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.595 [2024-06-10 10:53:51.831938] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.595 [2024-06-10 10:53:51.832136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.595 [2024-06-10 10:53:51.832152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.595 [2024-06-10 10:53:51.838132] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.595 [2024-06-10 10:53:51.838583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.595 [2024-06-10 10:53:51.838600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.595 [2024-06-10 10:53:51.847545] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.595 [2024-06-10 10:53:51.847993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.595 [2024-06-10 10:53:51.848010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.595 [2024-06-10 10:53:51.855905] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.595 [2024-06-10 10:53:51.856069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.595 [2024-06-10 10:53:51.856085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.595 [2024-06-10 10:53:51.865545] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.595 [2024-06-10 10:53:51.865782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.595 [2024-06-10 10:53:51.865800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.595 [2024-06-10 10:53:51.874705] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.595 [2024-06-10 10:53:51.874905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.595 [2024-06-10 10:53:51.874921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.857 [2024-06-10 10:53:51.880924] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.857 [2024-06-10 10:53:51.881122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.857 [2024-06-10 10:53:51.881139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.857 [2024-06-10 10:53:51.886234] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.857 [2024-06-10 10:53:51.886442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.857 [2024-06-10 10:53:51.886459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.857 [2024-06-10 10:53:51.892463] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.857 [2024-06-10 10:53:51.892660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.857 [2024-06-10 10:53:51.892676] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.857 [2024-06-10 10:53:51.897461] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.857 [2024-06-10 10:53:51.897660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.857 [2024-06-10 10:53:51.897676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.857 [2024-06-10 10:53:51.904406] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.857 [2024-06-10 10:53:51.904746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.857 [2024-06-10 10:53:51.904763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.857 [2024-06-10 10:53:51.912866] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.857 [2024-06-10 10:53:51.913175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.857 [2024-06-10 10:53:51.913193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.857 [2024-06-10 10:53:51.920032] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.857 [2024-06-10 10:53:51.920232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.857 [2024-06-10 10:53:51.920259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.857 [2024-06-10 10:53:51.925391] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.857 [2024-06-10 10:53:51.925700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.857 [2024-06-10 10:53:51.925717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.857 [2024-06-10 10:53:51.933737] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.857 [2024-06-10 10:53:51.933942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.857 [2024-06-10 10:53:51.933958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.857 [2024-06-10 10:53:51.943676] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.857 [2024-06-10 10:53:51.944029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.857 
[2024-06-10 10:53:51.944046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.857 [2024-06-10 10:53:51.954859] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.857 [2024-06-10 10:53:51.955148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.857 [2024-06-10 10:53:51.955165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.857 [2024-06-10 10:53:51.966230] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.857 [2024-06-10 10:53:51.966622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.857 [2024-06-10 10:53:51.966639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.857 [2024-06-10 10:53:51.976738] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.857 [2024-06-10 10:53:51.977212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.857 [2024-06-10 10:53:51.977229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.857 [2024-06-10 10:53:51.987763] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.857 [2024-06-10 10:53:51.988286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.857 [2024-06-10 10:53:51.988304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.857 [2024-06-10 10:53:51.997272] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.857 [2024-06-10 10:53:51.997480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.857 [2024-06-10 10:53:51.997497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.857 [2024-06-10 10:53:52.005315] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.857 [2024-06-10 10:53:52.005589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.857 [2024-06-10 10:53:52.005605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.857 [2024-06-10 10:53:52.015661] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.857 [2024-06-10 10:53:52.016041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.857 [2024-06-10 10:53:52.016058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.857 [2024-06-10 10:53:52.024997] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.857 [2024-06-10 10:53:52.025201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.857 [2024-06-10 10:53:52.025218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.857 [2024-06-10 10:53:52.034249] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.857 [2024-06-10 10:53:52.034657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.857 [2024-06-10 10:53:52.034673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.857 [2024-06-10 10:53:52.044237] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.857 [2024-06-10 10:53:52.044627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.857 [2024-06-10 10:53:52.044643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.857 [2024-06-10 10:53:52.053830] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.857 [2024-06-10 10:53:52.054146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.857 [2024-06-10 10:53:52.054164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.857 [2024-06-10 10:53:52.063719] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.857 [2024-06-10 10:53:52.064047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.857 [2024-06-10 10:53:52.064064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.857 [2024-06-10 10:53:52.074064] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.857 [2024-06-10 10:53:52.074451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.857 [2024-06-10 10:53:52.074468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.857 [2024-06-10 10:53:52.081718] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.858 [2024-06-10 10:53:52.081933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.858 [2024-06-10 10:53:52.081953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.858 [2024-06-10 10:53:52.089418] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.858 [2024-06-10 10:53:52.089813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.858 [2024-06-10 10:53:52.089831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.858 [2024-06-10 10:53:52.098342] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.858 [2024-06-10 10:53:52.098763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.858 [2024-06-10 10:53:52.098781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:27.858 [2024-06-10 10:53:52.106962] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.858 [2024-06-10 10:53:52.107306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.858 [2024-06-10 10:53:52.107324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:27.858 [2024-06-10 10:53:52.114408] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.858 [2024-06-10 10:53:52.114612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.858 [2024-06-10 10:53:52.114628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:27.858 [2024-06-10 10:53:52.124420] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.858 [2024-06-10 10:53:52.124735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.858 [2024-06-10 10:53:52.124753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:27.858 [2024-06-10 10:53:52.135284] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:27.858 [2024-06-10 10:53:52.135656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.858 [2024-06-10 10:53:52.135674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:28.169 [2024-06-10 10:53:52.146736] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.169 [2024-06-10 10:53:52.146907] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.169 [2024-06-10 10:53:52.146922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.169 [2024-06-10 10:53:52.159060] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.169 [2024-06-10 10:53:52.159578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.169 [2024-06-10 10:53:52.159595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:28.169 [2024-06-10 10:53:52.169672] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.169 [2024-06-10 10:53:52.170044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.169 [2024-06-10 10:53:52.170061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:28.169 [2024-06-10 10:53:52.179269] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.169 [2024-06-10 10:53:52.179611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.169 [2024-06-10 10:53:52.179628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:28.169 [2024-06-10 10:53:52.190002] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.169 [2024-06-10 10:53:52.190296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.169 [2024-06-10 10:53:52.190313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.169 [2024-06-10 10:53:52.200682] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.169 [2024-06-10 10:53:52.201020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.169 [2024-06-10 10:53:52.201038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:28.169 [2024-06-10 10:53:52.210644] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.169 [2024-06-10 10:53:52.210982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.169 [2024-06-10 10:53:52.211000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:28.169 [2024-06-10 10:53:52.220104] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.169 
[2024-06-10 10:53:52.220535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.169 [2024-06-10 10:53:52.220553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:28.169 [2024-06-10 10:53:52.229927] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.169 [2024-06-10 10:53:52.230302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.169 [2024-06-10 10:53:52.230319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.169 [2024-06-10 10:53:52.237545] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.169 [2024-06-10 10:53:52.237750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.169 [2024-06-10 10:53:52.237767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:28.169 [2024-06-10 10:53:52.245074] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.169 [2024-06-10 10:53:52.245464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.169 [2024-06-10 10:53:52.245481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:28.169 [2024-06-10 10:53:52.250539] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.169 [2024-06-10 10:53:52.250780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.169 [2024-06-10 10:53:52.250799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:28.169 [2024-06-10 10:53:52.258711] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.169 [2024-06-10 10:53:52.258909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.169 [2024-06-10 10:53:52.258926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.169 [2024-06-10 10:53:52.268620] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.169 [2024-06-10 10:53:52.268950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.169 [2024-06-10 10:53:52.268968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:28.169 [2024-06-10 10:53:52.279214] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.169 [2024-06-10 10:53:52.279467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.169 [2024-06-10 10:53:52.279484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:28.169 [2024-06-10 10:53:52.289755] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.169 [2024-06-10 10:53:52.290019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.169 [2024-06-10 10:53:52.290036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:28.169 [2024-06-10 10:53:52.301020] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.169 [2024-06-10 10:53:52.301348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.169 [2024-06-10 10:53:52.301367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.170 [2024-06-10 10:53:52.313142] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.170 [2024-06-10 10:53:52.313515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.170 [2024-06-10 10:53:52.313533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:28.170 [2024-06-10 10:53:52.324549] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.170 [2024-06-10 10:53:52.324817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.170 [2024-06-10 10:53:52.324835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:28.170 [2024-06-10 10:53:52.335884] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.170 [2024-06-10 10:53:52.336366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.170 [2024-06-10 10:53:52.336387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:28.170 [2024-06-10 10:53:52.347710] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.170 [2024-06-10 10:53:52.348066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.170 [2024-06-10 10:53:52.348083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.170 [2024-06-10 10:53:52.359124] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.170 [2024-06-10 10:53:52.359531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.170 [2024-06-10 10:53:52.359549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:28.170 [2024-06-10 10:53:52.370488] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.170 [2024-06-10 10:53:52.370814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.170 [2024-06-10 10:53:52.370831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:28.170 [2024-06-10 10:53:52.382365] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.170 [2024-06-10 10:53:52.382711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.170 [2024-06-10 10:53:52.382729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:28.170 [2024-06-10 10:53:52.394195] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.170 [2024-06-10 10:53:52.394601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.170 [2024-06-10 10:53:52.394618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.170 [2024-06-10 10:53:52.406386] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.170 [2024-06-10 10:53:52.406730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.170 [2024-06-10 10:53:52.406747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:28.170 [2024-06-10 10:53:52.418756] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.170 [2024-06-10 10:53:52.419102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.170 [2024-06-10 10:53:52.419120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:28.170 [2024-06-10 10:53:52.428465] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.170 [2024-06-10 10:53:52.428873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.170 [2024-06-10 10:53:52.428890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:28:28.170 [2024-06-10 10:53:52.439298] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.170 [2024-06-10 10:53:52.439584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.170 [2024-06-10 10:53:52.439600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.170 [2024-06-10 10:53:52.447739] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.170 [2024-06-10 10:53:52.448077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.170 [2024-06-10 10:53:52.448094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:28.431 [2024-06-10 10:53:52.456264] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.431 [2024-06-10 10:53:52.456470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.431 [2024-06-10 10:53:52.456487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:28.431 [2024-06-10 10:53:52.465482] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.431 [2024-06-10 10:53:52.465805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.431 [2024-06-10 10:53:52.465822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:28.431 [2024-06-10 10:53:52.472224] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.431 [2024-06-10 10:53:52.472507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.431 [2024-06-10 10:53:52.472524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.431 [2024-06-10 10:53:52.481202] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.431 [2024-06-10 10:53:52.481416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.431 [2024-06-10 10:53:52.481433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:28.431 [2024-06-10 10:53:52.489776] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.431 [2024-06-10 10:53:52.490035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.431 [2024-06-10 10:53:52.490052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:28.431 [2024-06-10 10:53:52.499006] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.431 [2024-06-10 10:53:52.499327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.431 [2024-06-10 10:53:52.499344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:28.431 [2024-06-10 10:53:52.508008] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.431 [2024-06-10 10:53:52.508303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.431 [2024-06-10 10:53:52.508321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.431 [2024-06-10 10:53:52.517386] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.431 [2024-06-10 10:53:52.517681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.431 [2024-06-10 10:53:52.517698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:28.431 [2024-06-10 10:53:52.525702] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.431 [2024-06-10 10:53:52.525845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.431 [2024-06-10 10:53:52.525861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:28.431 [2024-06-10 10:53:52.535810] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.431 [2024-06-10 10:53:52.536145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.431 [2024-06-10 10:53:52.536162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:28.431 [2024-06-10 10:53:52.544956] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.431 [2024-06-10 10:53:52.545158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.431 [2024-06-10 10:53:52.545174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.431 [2024-06-10 10:53:52.554320] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.431 [2024-06-10 10:53:52.554524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.431 [2024-06-10 10:53:52.554540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:28.431 [2024-06-10 10:53:52.564231] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.431 [2024-06-10 10:53:52.564604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.431 [2024-06-10 10:53:52.564621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:28.431 [2024-06-10 10:53:52.574054] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.431 [2024-06-10 10:53:52.574423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.431 [2024-06-10 10:53:52.574440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:28.431 [2024-06-10 10:53:52.583008] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.431 [2024-06-10 10:53:52.583283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.431 [2024-06-10 10:53:52.583300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.431 [2024-06-10 10:53:52.592565] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.431 [2024-06-10 10:53:52.592899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.431 [2024-06-10 10:53:52.592920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:28.431 [2024-06-10 10:53:52.602756] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.431 [2024-06-10 10:53:52.602982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.431 [2024-06-10 10:53:52.602999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:28.431 [2024-06-10 10:53:52.611792] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.431 [2024-06-10 10:53:52.612153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.431 [2024-06-10 10:53:52.612170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:28.431 [2024-06-10 10:53:52.621188] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.431 [2024-06-10 10:53:52.621531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.431 [2024-06-10 10:53:52.621548] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.431 [2024-06-10 10:53:52.630981] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.431 [2024-06-10 10:53:52.631355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.431 [2024-06-10 10:53:52.631372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:28.431 [2024-06-10 10:53:52.640899] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.431 [2024-06-10 10:53:52.641274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.431 [2024-06-10 10:53:52.641291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:28.431 [2024-06-10 10:53:52.650685] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.431 [2024-06-10 10:53:52.651025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.431 [2024-06-10 10:53:52.651043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:28.431 [2024-06-10 10:53:52.661187] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.431 [2024-06-10 10:53:52.661533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.431 [2024-06-10 10:53:52.661550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.431 [2024-06-10 10:53:52.672923] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.431 [2024-06-10 10:53:52.673345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.431 [2024-06-10 10:53:52.673362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:28.431 [2024-06-10 10:53:52.683637] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.431 [2024-06-10 10:53:52.683885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.431 [2024-06-10 10:53:52.683903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:28.431 [2024-06-10 10:53:52.693393] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.431 [2024-06-10 10:53:52.693785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.431 
[2024-06-10 10:53:52.693803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:28.431 [2024-06-10 10:53:52.703909] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.431 [2024-06-10 10:53:52.704135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.431 [2024-06-10 10:53:52.704152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.431 [2024-06-10 10:53:52.713590] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.431 [2024-06-10 10:53:52.714005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.431 [2024-06-10 10:53:52.714023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:28.691 [2024-06-10 10:53:52.724127] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.691 [2024-06-10 10:53:52.724384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.691 [2024-06-10 10:53:52.724400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:28.691 [2024-06-10 10:53:52.733968] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.691 [2024-06-10 10:53:52.734357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.691 [2024-06-10 10:53:52.734375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:28.691 [2024-06-10 10:53:52.743176] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.691 [2024-06-10 10:53:52.743385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.691 [2024-06-10 10:53:52.743402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.691 [2024-06-10 10:53:52.752415] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.691 [2024-06-10 10:53:52.752737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.691 [2024-06-10 10:53:52.752755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:28.691 [2024-06-10 10:53:52.761397] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.691 [2024-06-10 10:53:52.761635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.691 [2024-06-10 10:53:52.761655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:28.691 [2024-06-10 10:53:52.769929] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.691 [2024-06-10 10:53:52.770145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.691 [2024-06-10 10:53:52.770161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:28.691 [2024-06-10 10:53:52.778653] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.691 [2024-06-10 10:53:52.779021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.691 [2024-06-10 10:53:52.779038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.691 [2024-06-10 10:53:52.787652] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.691 [2024-06-10 10:53:52.787990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.691 [2024-06-10 10:53:52.788008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:28.691 [2024-06-10 10:53:52.795527] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.691 [2024-06-10 10:53:52.795787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.691 [2024-06-10 10:53:52.795803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:28.691 [2024-06-10 10:53:52.802395] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.691 [2024-06-10 10:53:52.802673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.691 [2024-06-10 10:53:52.802690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:28.691 [2024-06-10 10:53:52.808225] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.691 [2024-06-10 10:53:52.808621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.691 [2024-06-10 10:53:52.808639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.691 [2024-06-10 10:53:52.814206] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.691 [2024-06-10 10:53:52.814441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.691 [2024-06-10 10:53:52.814459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:28.691 [2024-06-10 10:53:52.818780] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.691 [2024-06-10 10:53:52.818968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.691 [2024-06-10 10:53:52.818984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:28.691 [2024-06-10 10:53:52.823737] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.691 [2024-06-10 10:53:52.824049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.691 [2024-06-10 10:53:52.824066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:28.691 [2024-06-10 10:53:52.829589] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.691 [2024-06-10 10:53:52.829918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.691 [2024-06-10 10:53:52.829935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.691 [2024-06-10 10:53:52.835030] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.691 [2024-06-10 10:53:52.835376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.691 [2024-06-10 10:53:52.835393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:28.691 [2024-06-10 10:53:52.843129] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.691 [2024-06-10 10:53:52.843324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.691 [2024-06-10 10:53:52.843340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:28.691 [2024-06-10 10:53:52.850738] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.691 [2024-06-10 10:53:52.850925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.691 [2024-06-10 10:53:52.850941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:28.691 [2024-06-10 10:53:52.858650] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.691 [2024-06-10 10:53:52.858976] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.691 [2024-06-10 10:53:52.858993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.691 [2024-06-10 10:53:52.866529] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.691 [2024-06-10 10:53:52.866718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.691 [2024-06-10 10:53:52.866734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:28.692 [2024-06-10 10:53:52.873563] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.692 [2024-06-10 10:53:52.873760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.692 [2024-06-10 10:53:52.873776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:28.692 [2024-06-10 10:53:52.881550] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.692 [2024-06-10 10:53:52.881890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.692 [2024-06-10 10:53:52.881908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:28.692 [2024-06-10 10:53:52.892095] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.692 [2024-06-10 10:53:52.892371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.692 [2024-06-10 10:53:52.892388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.692 [2024-06-10 10:53:52.902215] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.692 [2024-06-10 10:53:52.902447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.692 [2024-06-10 10:53:52.902463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:28.692 [2024-06-10 10:53:52.912936] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.692 [2024-06-10 10:53:52.913340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.692 [2024-06-10 10:53:52.913357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:28.692 [2024-06-10 10:53:52.922842] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.692 
[2024-06-10 10:53:52.923194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.692 [2024-06-10 10:53:52.923211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:28.692 [2024-06-10 10:53:52.932339] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.692 [2024-06-10 10:53:52.932634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.692 [2024-06-10 10:53:52.932652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.692 [2024-06-10 10:53:52.941670] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.692 [2024-06-10 10:53:52.941864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.692 [2024-06-10 10:53:52.941880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:28.692 [2024-06-10 10:53:52.951379] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.692 [2024-06-10 10:53:52.951604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.692 [2024-06-10 10:53:52.951621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:28.692 [2024-06-10 10:53:52.961734] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.692 [2024-06-10 10:53:52.961928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.692 [2024-06-10 10:53:52.961945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:28.692 [2024-06-10 10:53:52.971907] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.692 [2024-06-10 10:53:52.972114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.692 [2024-06-10 10:53:52.972136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.952 [2024-06-10 10:53:52.982579] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.952 [2024-06-10 10:53:52.982848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.952 [2024-06-10 10:53:52.982866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:28.952 [2024-06-10 10:53:52.993506] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.952 [2024-06-10 10:53:52.993876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.952 [2024-06-10 10:53:52.993892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:28.952 [2024-06-10 10:53:53.004912] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.952 [2024-06-10 10:53:53.005218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.952 [2024-06-10 10:53:53.005235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:28.952 [2024-06-10 10:53:53.014086] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bf3b30) with pdu=0x2000190fef90 00:28:28.952 [2024-06-10 10:53:53.014390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.952 [2024-06-10 10:53:53.014407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.952 00:28:28.952 Latency(us) 00:28:28.952 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:28.952 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:28.952 nvme0n1 : 2.00 3783.25 472.91 0.00 0.00 4221.10 1870.51 15619.41 00:28:28.952 =================================================================================================================== 00:28:28.952 Total : 3783.25 472.91 0.00 0.00 4221.10 1870.51 15619.41 00:28:28.952 0 00:28:28.952 10:53:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:28.952 10:53:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:28.952 10:53:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:28.952 | .driver_specific 00:28:28.952 | .nvme_error 00:28:28.952 | .status_code 00:28:28.952 | .command_transient_transport_error' 00:28:28.952 10:53:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:28.952 10:53:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 244 > 0 )) 00:28:28.952 10:53:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1010809 00:28:28.952 10:53:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 1010809 ']' 00:28:28.952 10:53:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 1010809 00:28:28.952 10:53:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:28:28.952 10:53:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:28.952 10:53:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1010809 00:28:29.213 10:53:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 
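Editor's note: the xtrace lines above show how get_transient_errcount extracts that number from bperf: bdev_get_iostat is issued over the bperf RPC socket and the per-bdev NVMe error counters are filtered with jq. Reconstructed as a standalone snippet (socket path, RPC name and jq filter are taken verbatim from the trace; the errcount variable name is mine):

  errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
                 -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
             | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error')
  (( errcount > 0 ))   # this run counted 244 transient transport error completions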
00:28:29.213 10:53:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:29.213 10:53:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1010809' 00:28:29.213 killing process with pid 1010809 00:28:29.213 10:53:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 1010809 00:28:29.213 Received shutdown signal, test time was about 2.000000 seconds 00:28:29.213 00:28:29.213 Latency(us) 00:28:29.213 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:29.213 =================================================================================================================== 00:28:29.213 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:29.213 10:53:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 1010809 00:28:29.213 10:53:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1008404 00:28:29.213 10:53:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 1008404 ']' 00:28:29.213 10:53:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 1008404 00:28:29.213 10:53:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:28:29.213 10:53:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:29.213 10:53:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1008404 00:28:29.213 10:53:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:29.213 10:53:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:29.213 10:53:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1008404' 00:28:29.213 killing process with pid 1008404 00:28:29.213 10:53:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 1008404 00:28:29.213 [2024-06-10 10:53:53.433669] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:29.213 10:53:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 1008404 00:28:29.474 00:28:29.474 real 0m16.171s 00:28:29.474 user 0m31.720s 00:28:29.474 sys 0m3.236s 00:28:29.474 10:53:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:29.474 10:53:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:29.474 ************************************ 00:28:29.474 END TEST nvmf_digest_error 00:28:29.474 ************************************ 00:28:29.474 10:53:53 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:29.474 10:53:53 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:29.474 10:53:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:29.474 10:53:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:28:29.474 10:53:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:29.474 10:53:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:28:29.474 10:53:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 
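Editor's note: the killprocess traces above follow a fixed pattern: sanity-check the pid argument, probe it with kill -0, read the command name with ps so a bare sudo wrapper is never signalled directly, then kill and wait. A simplified reconstruction for reference only; the real helper in test/common/autotest_common.sh has more branches than shown here:

  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1                       # the '[' -z "$pid" ']' guard
      if ! kill -0 "$pid" 2>/dev/null; then           # liveness probe
          echo "Process with pid $pid is not found"   # printed when the target already exited
          return 0
      fi
      if [[ $(uname) == Linux ]]; then
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid")
          [[ $process_name != sudo ]] || return 1     # stand-in for the real sudo handling
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true
  }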
00:28:29.474 10:53:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:29.474 rmmod nvme_tcp 00:28:29.474 rmmod nvme_fabrics 00:28:29.474 rmmod nvme_keyring 00:28:29.474 10:53:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:29.474 10:53:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:28:29.474 10:53:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:28:29.474 10:53:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1008404 ']' 00:28:29.474 10:53:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1008404 00:28:29.474 10:53:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@949 -- # '[' -z 1008404 ']' 00:28:29.474 10:53:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@953 -- # kill -0 1008404 00:28:29.474 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (1008404) - No such process 00:28:29.474 10:53:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@976 -- # echo 'Process with pid 1008404 is not found' 00:28:29.474 Process with pid 1008404 is not found 00:28:29.474 10:53:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:29.474 10:53:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:29.474 10:53:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:29.474 10:53:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:29.474 10:53:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:29.474 10:53:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:29.474 10:53:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:29.474 10:53:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:32.021 10:53:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:32.021 00:28:32.021 real 0m41.984s 00:28:32.021 user 1m5.434s 00:28:32.021 sys 0m11.915s 00:28:32.021 10:53:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:32.021 10:53:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:32.021 ************************************ 00:28:32.021 END TEST nvmf_digest 00:28:32.021 ************************************ 00:28:32.021 10:53:55 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:28:32.021 10:53:55 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:28:32.021 10:53:55 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:28:32.021 10:53:55 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:32.021 10:53:55 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:28:32.021 10:53:55 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:32.021 10:53:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:32.021 ************************************ 00:28:32.021 START TEST nvmf_bdevperf 00:28:32.021 ************************************ 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:32.021 * Looking for test storage... 
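Editor's note: nvmftestfini then unwinds the digest environment before the next suite: the kernel initiator modules are removed, the already exited target pid is reaped, the target-side namespace is dropped and the initiator address is flushed, after which the timing summary closes TEST nvmf_digest and TEST nvmf_bdevperf begins. A condensed sketch of that teardown, assuming the namespace and interface names used in this run; the body of _remove_spdk_ns is an assumption, the real logic lives in test/nvmf/common.sh:

  sync
  modprobe -v -r nvme-tcp                               # pulls out nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  killprocess "$nvmfpid" || true                        # 1008404 had already exited at this point
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed form of _remove_spdk_ns
  ip -4 addr flush cvl_0_1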
00:28:32.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:32.021 10:53:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:40.162 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:40.162 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:40.162 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:40.162 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:40.162 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:40.162 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:40.162 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:40.162 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:28:40.162 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:40.162 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:28:40.162 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:28:40.162 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:28:40.162 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:28:40.162 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:28:40.162 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:40.162 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:40.162 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:40.162 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:40.162 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:40.162 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:40.162 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:40.162 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:40.162 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:40.163 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:40.163 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:40.163 Found net devices under 0000:31:00.0: cvl_0_0 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:40.163 Found net devices under 0000:31:00.1: cvl_0_1 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:40.163 10:54:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:40.163 10:54:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:40.163 10:54:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:40.163 10:54:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:40.163 10:54:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:40.163 10:54:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:40.163 10:54:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:40.163 10:54:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:40.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:40.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.723 ms 00:28:40.163 00:28:40.163 --- 10.0.0.2 ping statistics --- 00:28:40.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:40.163 rtt min/avg/max/mdev = 0.723/0.723/0.723/0.000 ms 00:28:40.163 10:54:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:40.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:40.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.366 ms 00:28:40.163 00:28:40.163 --- 10.0.0.1 ping statistics --- 00:28:40.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:40.163 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:28:40.163 10:54:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:40.163 10:54:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:28:40.163 10:54:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:40.163 10:54:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:40.163 10:54:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:40.163 10:54:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:40.163 10:54:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:40.163 10:54:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:40.163 10:54:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:40.163 10:54:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:40.163 10:54:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:40.163 10:54:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:40.163 10:54:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:40.163 10:54:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:40.163 10:54:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1015843 00:28:40.163 10:54:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1015843 00:28:40.163 10:54:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:40.163 10:54:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 1015843 ']' 00:28:40.163 10:54:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:40.163 10:54:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:40.163 10:54:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:40.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:40.163 10:54:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:40.163 10:54:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:40.163 [2024-06-10 10:54:03.339676] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:28:40.163 [2024-06-10 10:54:03.339741] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:40.163 EAL: No free 2048 kB hugepages reported on node 1 00:28:40.163 [2024-06-10 10:54:03.429421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:40.163 [2024-06-10 10:54:03.525584] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
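The block above is nvmf/common.sh doing its physical-NIC bring-up for this run: it finds the two E810 functions (0000:31:00.0 and 0000:31:00.1, device 0x159b) by globbing /sys/bus/pci/devices/$pci/net, getting cvl_0_0 and cvl_0_1, then nvmf_tcp_init moves the target-side port into a private network namespace so target (10.0.0.2) and initiator (10.0.0.1) can talk NVMe/TCP over the back-to-back link on a single host. A condensed, stand-alone restatement of the traced commands follows; interface names, addresses and the 4420 port are the values from this particular run, and it must be run as root:

  TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"              # target-side port lives inside the namespace
  ip addr add 10.0.0.1/24 dev "$INI_IF"          # initiator IP stays in the default namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                             # default ns -> namespace (0.723 ms above)
  ip netns exec "$NS" ping -c 1 10.0.0.1         # namespace -> default ns (0.366 ms above)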
00:28:40.163 [2024-06-10 10:54:03.525645] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:40.163 [2024-06-10 10:54:03.525654] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:40.163 [2024-06-10 10:54:03.525660] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:40.163 [2024-06-10 10:54:03.525667] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:40.163 [2024-06-10 10:54:03.525806] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:28:40.163 [2024-06-10 10:54:03.525968] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:28:40.163 [2024-06-10 10:54:03.525969] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:28:40.163 10:54:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:40.163 10:54:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:28:40.163 10:54:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:40.163 10:54:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:40.164 10:54:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:40.164 10:54:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:40.164 10:54:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:40.164 10:54:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:40.164 10:54:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:40.164 [2024-06-10 10:54:04.172003] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:40.164 10:54:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:40.164 10:54:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:40.164 10:54:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:40.164 10:54:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:40.164 Malloc0 00:28:40.164 10:54:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:40.164 10:54:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:40.164 10:54:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:40.164 10:54:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:40.164 10:54:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:40.164 10:54:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:40.164 10:54:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:40.164 10:54:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:40.164 10:54:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:40.164 10:54:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:40.164 10:54:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 
00:28:40.164 10:54:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:40.164 [2024-06-10 10:54:04.236507] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:40.164 [2024-06-10 10:54:04.236713] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:40.164 10:54:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:40.164 10:54:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:40.164 10:54:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:40.164 10:54:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:28:40.164 10:54:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:28:40.164 10:54:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:40.164 10:54:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:40.164 { 00:28:40.164 "params": { 00:28:40.164 "name": "Nvme$subsystem", 00:28:40.164 "trtype": "$TEST_TRANSPORT", 00:28:40.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:40.164 "adrfam": "ipv4", 00:28:40.164 "trsvcid": "$NVMF_PORT", 00:28:40.164 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:40.164 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:40.164 "hdgst": ${hdgst:-false}, 00:28:40.164 "ddgst": ${ddgst:-false} 00:28:40.164 }, 00:28:40.164 "method": "bdev_nvme_attach_controller" 00:28:40.164 } 00:28:40.164 EOF 00:28:40.164 )") 00:28:40.164 10:54:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:28:40.164 10:54:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:28:40.164 10:54:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:28:40.164 10:54:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:40.164 "params": { 00:28:40.164 "name": "Nvme1", 00:28:40.164 "trtype": "tcp", 00:28:40.164 "traddr": "10.0.0.2", 00:28:40.164 "adrfam": "ipv4", 00:28:40.164 "trsvcid": "4420", 00:28:40.164 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:40.164 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:40.164 "hdgst": false, 00:28:40.164 "ddgst": false 00:28:40.164 }, 00:28:40.164 "method": "bdev_nvme_attach_controller" 00:28:40.164 }' 00:28:40.164 [2024-06-10 10:54:04.288398] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:28:40.164 [2024-06-10 10:54:04.288447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1016025 ] 00:28:40.164 EAL: No free 2048 kB hugepages reported on node 1 00:28:40.164 [2024-06-10 10:54:04.348288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.164 [2024-06-10 10:54:04.413364] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.425 Running I/O for 1 seconds... 
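Between 10:54:03 and 10:54:04 the test brings the target up and wires bdevperf to it: nvmfappstart launches nvmf_tgt inside the namespace (pid 1015843, core mask 0xE), tgt_init issues the rpc_cmd calls traced above, and host/bdevperf.sh@27 starts bdevperf with the JSON printed above fed through /dev/fd/62 (a process substitution). Since rpc_cmd is a thin wrapper around scripts/rpc.py, roughly the same setup can be reproduced by hand as in the sketch below; paths assume the SPDK repo root, and the rpc_get_methods poll is only a minimal stand-in for the real waitforlisten helper:

  NS_CMD="ip netns exec cvl_0_0_ns_spdk"
  $NS_CMD ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  # wait until the target answers on its default RPC socket before configuring it
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # host/bdevperf.sh then launches, as traced above:
  #   ./build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 1
  # where gen_nvmf_target_json (nvmf/common.sh) emits the bdev_nvme_attach_controller object
  # printed above inside SPDK's standard {"subsystems": [...]} JSON-config wrapper.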
00:28:41.366 00:28:41.366 Latency(us) 00:28:41.366 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:41.366 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:41.366 Verification LBA range: start 0x0 length 0x4000 00:28:41.366 Nvme1n1 : 1.00 9305.97 36.35 0.00 0.00 13683.11 921.60 15728.64 00:28:41.366 =================================================================================================================== 00:28:41.366 Total : 9305.97 36.35 0.00 0.00 13683.11 921.60 15728.64 00:28:41.627 10:54:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1016357 00:28:41.627 10:54:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:41.627 10:54:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:41.627 10:54:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:41.627 10:54:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:28:41.627 10:54:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:28:41.627 10:54:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:41.627 10:54:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:41.627 { 00:28:41.627 "params": { 00:28:41.627 "name": "Nvme$subsystem", 00:28:41.627 "trtype": "$TEST_TRANSPORT", 00:28:41.627 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.627 "adrfam": "ipv4", 00:28:41.627 "trsvcid": "$NVMF_PORT", 00:28:41.627 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.627 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.627 "hdgst": ${hdgst:-false}, 00:28:41.627 "ddgst": ${ddgst:-false} 00:28:41.627 }, 00:28:41.627 "method": "bdev_nvme_attach_controller" 00:28:41.627 } 00:28:41.627 EOF 00:28:41.627 )") 00:28:41.627 10:54:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:28:41.627 10:54:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:28:41.627 10:54:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:28:41.627 10:54:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:41.627 "params": { 00:28:41.627 "name": "Nvme1", 00:28:41.627 "trtype": "tcp", 00:28:41.627 "traddr": "10.0.0.2", 00:28:41.627 "adrfam": "ipv4", 00:28:41.627 "trsvcid": "4420", 00:28:41.627 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:41.627 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:41.627 "hdgst": false, 00:28:41.627 "ddgst": false 00:28:41.627 }, 00:28:41.627 "method": "bdev_nvme_attach_controller" 00:28:41.627 }' 00:28:41.627 [2024-06-10 10:54:05.788222] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:28:41.627 [2024-06-10 10:54:05.788282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1016357 ] 00:28:41.627 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.627 [2024-06-10 10:54:05.852266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.888 [2024-06-10 10:54:05.915502] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:28:42.147 Running I/O for 15 seconds... 
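Reading the single-job summary above: 9305.97 IOPS at the 4 KiB I/O size over the 1 s verify run, with an average completion latency of 13683.11 us at queue depth 128. Both the MiB/s column and the IOPS/latency relationship can be checked directly:

  awk 'BEGIN { printf "%.2f MiB/s\n", 9305.97 * 4096 / (1024 * 1024) }'   # -> 36.35, matching the Nvme1n1 row
  awk 'BEGIN { printf "%.0f IOPS\n",  128 / (13683.11 / 1e6) }'           # queue depth / avg latency -> ~9355

The roughly 0.5 percent gap between the ~9355 IOPS implied by queue depth over average latency and the measured 9305.97 is consistent with ramp-up and drain overhead inside a 1-second window.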
00:28:44.692 10:54:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1015843 00:28:44.692 10:54:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:44.692 [2024-06-10 10:54:08.753554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:73328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.692 [2024-06-10 10:54:08.753597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.692 [2024-06-10 10:54:08.753618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:73336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.692 [2024-06-10 10:54:08.753628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.692 [2024-06-10 10:54:08.753638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:73344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.692 [2024-06-10 10:54:08.753647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.692 [2024-06-10 10:54:08.753659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:73352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.692 [2024-06-10 10:54:08.753668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.692 [2024-06-10 10:54:08.753679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:73360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.692 [2024-06-10 10:54:08.753686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.692 [2024-06-10 10:54:08.753697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:73368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.692 [2024-06-10 10:54:08.753705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.692 [2024-06-10 10:54:08.753717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:73376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.692 [2024-06-10 10:54:08.753725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.692 [2024-06-10 10:54:08.753735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:73384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.692 [2024-06-10 10:54:08.753743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.692 [2024-06-10 10:54:08.753754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.692 [2024-06-10 10:54:08.753763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.692 [2024-06-10 10:54:08.753775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:73400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.692 [2024-06-10 10:54:08.753783] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.692 [2024-06-10 10:54:08.753799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:73408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.692 [2024-06-10 10:54:08.753808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.692 [2024-06-10 10:54:08.753818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:73416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.692 [2024-06-10 10:54:08.753828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.692 [2024-06-10 10:54:08.753840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:73424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.692 [2024-06-10 10:54:08.753849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.692 [2024-06-10 10:54:08.753860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:73432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.692 [2024-06-10 10:54:08.753870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.692 [2024-06-10 10:54:08.753882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:73440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.692 [2024-06-10 10:54:08.753892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.692 [2024-06-10 10:54:08.753902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:73448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.692 [2024-06-10 10:54:08.753910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.692 [2024-06-10 10:54:08.753919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:73456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.692 [2024-06-10 10:54:08.753926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.692 [2024-06-10 10:54:08.753935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:73464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.692 [2024-06-10 10:54:08.753942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.692 [2024-06-10 10:54:08.753951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:73472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.692 [2024-06-10 10:54:08.753960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.692 [2024-06-10 10:54:08.753970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:73480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.692 [2024-06-10 10:54:08.753978] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.692 [2024-06-10 10:54:08.753987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:73488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.692 [2024-06-10 10:54:08.753995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.692 [2024-06-10 10:54:08.754005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.692 [2024-06-10 10:54:08.754013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.692 [2024-06-10 10:54:08.754025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:73504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.692 [2024-06-10 10:54:08.754034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.692 [2024-06-10 10:54:08.754046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:73512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 [2024-06-10 10:54:08.754064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:73520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 [2024-06-10 10:54:08.754082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:73528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 [2024-06-10 10:54:08.754098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:73536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 [2024-06-10 10:54:08.754115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:73544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 [2024-06-10 10:54:08.754131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:73552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 [2024-06-10 10:54:08.754148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:73560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 [2024-06-10 10:54:08.754165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:73568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 [2024-06-10 10:54:08.754181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:73576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 [2024-06-10 10:54:08.754197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:73584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 [2024-06-10 10:54:08.754214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:73592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 [2024-06-10 10:54:08.754230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:73600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 [2024-06-10 10:54:08.754252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:73608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 [2024-06-10 10:54:08.754271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:73616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 [2024-06-10 10:54:08.754286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:73624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 [2024-06-10 10:54:08.754302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:73632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 [2024-06-10 10:54:08.754318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:73640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 
[2024-06-10 10:54:08.754334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:73648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 [2024-06-10 10:54:08.754350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:73656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 [2024-06-10 10:54:08.754367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:73664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 [2024-06-10 10:54:08.754383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:73672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 [2024-06-10 10:54:08.754399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:73680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 [2024-06-10 10:54:08.754415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:73688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 [2024-06-10 10:54:08.754431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:73696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 [2024-06-10 10:54:08.754448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:73704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 [2024-06-10 10:54:08.754465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 [2024-06-10 10:54:08.754481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:73720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 [2024-06-10 10:54:08.754498] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:73728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 [2024-06-10 10:54:08.754514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:73736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 [2024-06-10 10:54:08.754530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:73744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 [2024-06-10 10:54:08.754546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:73752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 [2024-06-10 10:54:08.754562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:73760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 [2024-06-10 10:54:08.754578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:73768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 [2024-06-10 10:54:08.754594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:73776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 [2024-06-10 10:54:08.754610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 [2024-06-10 10:54:08.754626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:73792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 [2024-06-10 10:54:08.754643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 [2024-06-10 10:54:08.754660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:78 nsid:1 lba:73808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 [2024-06-10 10:54:08.754678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:73816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 [2024-06-10 10:54:08.754695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.693 [2024-06-10 10:54:08.754702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.693 [2024-06-10 10:54:08.754712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:73832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-10 10:54:08.754720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.694 [2024-06-10 10:54:08.754730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:73840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-10 10:54:08.754737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.694 [2024-06-10 10:54:08.754747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:73848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-10 10:54:08.754754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.694 [2024-06-10 10:54:08.754764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:73856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-10 10:54:08.754771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.694 [2024-06-10 10:54:08.754780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:73864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-10 10:54:08.754787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.694 [2024-06-10 10:54:08.754796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:73872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-10 10:54:08.754804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.694 [2024-06-10 10:54:08.754813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:73880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-10 10:54:08.754820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.694 [2024-06-10 10:54:08.754829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:73888 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:28:44.694 [2024-06-10 10:54:08.754836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.694 [2024-06-10 10:54:08.754846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:73896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-10 10:54:08.754854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.694 [2024-06-10 10:54:08.754863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:73904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-10 10:54:08.754870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.694 [2024-06-10 10:54:08.754879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:73912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-10 10:54:08.754887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.694 [2024-06-10 10:54:08.754897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:73920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-10 10:54:08.754905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.694 [2024-06-10 10:54:08.754914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:73928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-10 10:54:08.754921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.694 [2024-06-10 10:54:08.754930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:73936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-10 10:54:08.754937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.694 [2024-06-10 10:54:08.754947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:73944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-10 10:54:08.754955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.694 [2024-06-10 10:54:08.754964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:73952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-10 10:54:08.754971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.694 [2024-06-10 10:54:08.754980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:73960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-10 10:54:08.754987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.694 [2024-06-10 10:54:08.754997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:73968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-10 
10:54:08.755004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.694 [2024-06-10 10:54:08.755014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:73976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-10 10:54:08.755020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.694 [2024-06-10 10:54:08.755030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:73984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-10 10:54:08.755037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.694 [2024-06-10 10:54:08.755047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:73992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-10 10:54:08.755054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.694 [2024-06-10 10:54:08.755063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:74000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-10 10:54:08.755070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.694 [2024-06-10 10:54:08.755079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:74008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-10 10:54:08.755086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.694 [2024-06-10 10:54:08.755097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:74016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-10 10:54:08.755105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.694 [2024-06-10 10:54:08.755114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:74024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-10 10:54:08.755121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.694 [2024-06-10 10:54:08.755130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:74032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-10 10:54:08.755136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.694 [2024-06-10 10:54:08.755146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:74040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-10 10:54:08.755154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.694 [2024-06-10 10:54:08.755163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:74048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-10 10:54:08.755170] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.694 [2024-06-10 10:54:08.755179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:74056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-10 10:54:08.755186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.694 [2024-06-10 10:54:08.755195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:74064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-10 10:54:08.755203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.694 [2024-06-10 10:54:08.755212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:74072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-10 10:54:08.755219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.694 [2024-06-10 10:54:08.755228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:74080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-10 10:54:08.755235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.694 [2024-06-10 10:54:08.755335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:74088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-10 10:54:08.755344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.694 [2024-06-10 10:54:08.755353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:74096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-10 10:54:08.755361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.694 [2024-06-10 10:54:08.755371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:74104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-10 10:54:08.755378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.694 [2024-06-10 10:54:08.755387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:74112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-10 10:54:08.755396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.694 [2024-06-10 10:54:08.755405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:74120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-10 10:54:08.755413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.694 [2024-06-10 10:54:08.755422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:74128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-10 10:54:08.755429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.694 [2024-06-10 10:54:08.755438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.694 [2024-06-10 10:54:08.755445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.694 [2024-06-10 10:54:08.755455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:74144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.695 [2024-06-10 10:54:08.755462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.695 [2024-06-10 10:54:08.755471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:74152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.695 [2024-06-10 10:54:08.755479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.695 [2024-06-10 10:54:08.755488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:74160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.695 [2024-06-10 10:54:08.755495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.695 [2024-06-10 10:54:08.755505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:74168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.695 [2024-06-10 10:54:08.755513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.695 [2024-06-10 10:54:08.755522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:74176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.695 [2024-06-10 10:54:08.755529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.695 [2024-06-10 10:54:08.755538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:74184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.695 [2024-06-10 10:54:08.755545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.695 [2024-06-10 10:54:08.755554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:74192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.695 [2024-06-10 10:54:08.755561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.695 [2024-06-10 10:54:08.755571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:74200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.695 [2024-06-10 10:54:08.755578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.695 [2024-06-10 10:54:08.755587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.695 [2024-06-10 10:54:08.755594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:44.695 [2024-06-10 10:54:08.755603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.695 [2024-06-10 10:54:08.755612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.695 [2024-06-10 10:54:08.755621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:74224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.695 [2024-06-10 10:54:08.755628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.695 [2024-06-10 10:54:08.755638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:73216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.695 [2024-06-10 10:54:08.755645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.695 [2024-06-10 10:54:08.755654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:73224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.695 [2024-06-10 10:54:08.755662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.695 [2024-06-10 10:54:08.755671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:73232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.695 [2024-06-10 10:54:08.755678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.695 [2024-06-10 10:54:08.755687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:73240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.695 [2024-06-10 10:54:08.755694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.695 [2024-06-10 10:54:08.755703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:73248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.695 [2024-06-10 10:54:08.755711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.695 [2024-06-10 10:54:08.755720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:73256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.695 [2024-06-10 10:54:08.755727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.695 [2024-06-10 10:54:08.755736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:73264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.695 [2024-06-10 10:54:08.755743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.695 [2024-06-10 10:54:08.755753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:73272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.695 [2024-06-10 10:54:08.755760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.695 [2024-06-10 
10:54:08.755769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:73280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.695 [2024-06-10 10:54:08.755776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.695 [2024-06-10 10:54:08.755786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:73288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.695 [2024-06-10 10:54:08.755793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.695 [2024-06-10 10:54:08.755802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:73296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.695 [2024-06-10 10:54:08.755809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.695 [2024-06-10 10:54:08.755820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:73304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.695 [2024-06-10 10:54:08.755827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.695 [2024-06-10 10:54:08.755836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:73312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.695 [2024-06-10 10:54:08.755843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.695 [2024-06-10 10:54:08.755852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:73320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.695 [2024-06-10 10:54:08.755860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.695 [2024-06-10 10:54:08.755869] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde9f60 is same with the state(5) to be set 00:28:44.695 [2024-06-10 10:54:08.755878] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:44.695 [2024-06-10 10:54:08.755884] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:44.695 [2024-06-10 10:54:08.755890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74232 len:8 PRP1 0x0 PRP2 0x0 00:28:44.695 [2024-06-10 10:54:08.755899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.695 [2024-06-10 10:54:08.755940] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xde9f60 was disconnected and freed. reset controller. 
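Everything from the kill -9 of the target (pid 1015843) down to this point is the initiator draining its queue: with the TCP connection to the subsystem gone, every outstanding command on qpair 0xde9f60 is completed with the generic NVMe status ABORTED - SQ DELETION (the (00/08) in each completion line, i.e. status code type 0x0, status code 0x8), after which bdev_nvme disconnects and frees the qpair and schedules a controller reset. The dump covers LBAs 73216 through 74232 in 8-block steps, which is exactly the 128 commands the -q 128 job had in flight. When reading such a dump offline, a two-liner like the following condenses it (bdevperf.log is a placeholder for wherever this output was captured, one log record per line):

  grep -c 'ABORTED - SQ DELETION' bdevperf.log                                  # how many completions were aborted
  grep -o 'lba:[0-9]*' bdevperf.log | cut -d: -f2 | sort -n | sed -n '1p;$p'    # LBA span of the affected I/O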
00:28:44.695 [2024-06-10 10:54:08.755985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.695 [2024-06-10 10:54:08.755995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.695 [2024-06-10 10:54:08.756004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.695 [2024-06-10 10:54:08.756011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.695 [2024-06-10 10:54:08.756020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.695 [2024-06-10 10:54:08.756027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.695 [2024-06-10 10:54:08.756035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:44.695 [2024-06-10 10:54:08.756042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.695 [2024-06-10 10:54:08.756049] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:44.695 [2024-06-10 10:54:08.759575] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.695 [2024-06-10 10:54:08.759597] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:44.695 [2024-06-10 10:54:08.760288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.695 [2024-06-10 10:54:08.760314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:44.695 [2024-06-10 10:54:08.760326] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:44.695 [2024-06-10 10:54:08.760557] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:44.695 [2024-06-10 10:54:08.760782] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.695 [2024-06-10 10:54:08.760791] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.695 [2024-06-10 10:54:08.760800] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.695 [2024-06-10 10:54:08.764358] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
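errno = 111 in the posix_sock_create message above is ECONNREFUSED on Linux: at this point in the test nothing is listening on 10.0.0.2 port 4420, so the reconnect attempt fails as soon as the TCP connect is tried, the controller cannot be reinitialized, and it is left in the failed state. The sketch below reproduces the same failure mode with plain POSIX sockets; it is not SPDK's posix.c, only an illustration of where the errno comes from, and the address and port are simply copied from the log.

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Same address family, address, and port as in the log. */
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on the port this typically fails with
         * errno 111 (ECONNREFUSED), which is what the log records. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

When run against an address with no listener, the call fails with errno 111, matching the "connect() failed, errno = 111" lines repeated through the rest of this section.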
00:28:44.695 [2024-06-10 10:54:08.773778] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.695 [2024-06-10 10:54:08.774476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.695 [2024-06-10 10:54:08.774516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:44.695 [2024-06-10 10:54:08.774527] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:44.695 [2024-06-10 10:54:08.774768] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:44.695 [2024-06-10 10:54:08.774992] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.695 [2024-06-10 10:54:08.775002] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.695 [2024-06-10 10:54:08.775010] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.695 [2024-06-10 10:54:08.778572] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.696 [2024-06-10 10:54:08.787576] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.696 [2024-06-10 10:54:08.788207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.696 [2024-06-10 10:54:08.788226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:44.696 [2024-06-10 10:54:08.788234] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:44.696 [2024-06-10 10:54:08.788460] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:44.696 [2024-06-10 10:54:08.788680] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.696 [2024-06-10 10:54:08.788689] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.696 [2024-06-10 10:54:08.788696] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.696 [2024-06-10 10:54:08.792238] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.696 [2024-06-10 10:54:08.801458] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.696 [2024-06-10 10:54:08.802126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.696 [2024-06-10 10:54:08.802165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:44.696 [2024-06-10 10:54:08.802175] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:44.696 [2024-06-10 10:54:08.802423] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:44.696 [2024-06-10 10:54:08.802648] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.696 [2024-06-10 10:54:08.802657] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.696 [2024-06-10 10:54:08.802665] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.696 [2024-06-10 10:54:08.806227] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.696 [2024-06-10 10:54:08.815450] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.696 [2024-06-10 10:54:08.816155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.696 [2024-06-10 10:54:08.816194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:44.696 [2024-06-10 10:54:08.816206] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:44.696 [2024-06-10 10:54:08.816455] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:44.696 [2024-06-10 10:54:08.816679] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.696 [2024-06-10 10:54:08.816689] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.696 [2024-06-10 10:54:08.816697] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.696 [2024-06-10 10:54:08.820254] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.696 [2024-06-10 10:54:08.829257] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.696 [2024-06-10 10:54:08.829886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.696 [2024-06-10 10:54:08.829906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:44.696 [2024-06-10 10:54:08.829913] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:44.696 [2024-06-10 10:54:08.830132] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:44.696 [2024-06-10 10:54:08.830361] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.696 [2024-06-10 10:54:08.830370] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.696 [2024-06-10 10:54:08.830377] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.696 [2024-06-10 10:54:08.833923] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.696 [2024-06-10 10:54:08.843128] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.696 [2024-06-10 10:54:08.843829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.696 [2024-06-10 10:54:08.843868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:44.696 [2024-06-10 10:54:08.843879] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:44.696 [2024-06-10 10:54:08.844117] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:44.696 [2024-06-10 10:54:08.844353] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.696 [2024-06-10 10:54:08.844363] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.696 [2024-06-10 10:54:08.844371] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.696 [2024-06-10 10:54:08.847920] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.696 [2024-06-10 10:54:08.856919] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.696 [2024-06-10 10:54:08.857651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.696 [2024-06-10 10:54:08.857690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:44.696 [2024-06-10 10:54:08.857705] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:44.696 [2024-06-10 10:54:08.857943] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:44.696 [2024-06-10 10:54:08.858167] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.696 [2024-06-10 10:54:08.858176] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.696 [2024-06-10 10:54:08.858184] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.696 [2024-06-10 10:54:08.861748] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.696 [2024-06-10 10:54:08.870754] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.696 [2024-06-10 10:54:08.871375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.696 [2024-06-10 10:54:08.871396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:44.696 [2024-06-10 10:54:08.871404] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:44.696 [2024-06-10 10:54:08.871625] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:44.696 [2024-06-10 10:54:08.871844] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.696 [2024-06-10 10:54:08.871853] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.696 [2024-06-10 10:54:08.871860] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.696 [2024-06-10 10:54:08.875413] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
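Every retry block above walks the same visible progression: nvme_ctrlr_disconnect starts a reset, the TCP connect is refused, nvme_ctrlr_process_init reports the controller in an error state, the reconnect poll gives up, nvme_ctrlr_fail leaves the controller in the failed state, and bdev_nvme records the reset as failed. The sketch below is only a schematic of that order, with invented step names; it is not SPDK's internal controller state machine.

#include <stdio.h>

/* Invented labels that mirror the order of the messages in one retry block. */
enum retry_step {
    STEP_RESETTING,
    STEP_CONNECT_REFUSED,
    STEP_ERROR_STATE,
    STEP_REINIT_FAILED,
    STEP_CTRLR_FAILED,
    STEP_RESET_FAILED
};

static const char *step_name[] = {
    "resetting controller",
    "connect() refused (errno 111)",
    "ctrlr is in error state",
    "controller reinitialization failed",
    "controller in failed state",
    "resetting controller failed",
};

int main(void)
{
    /* Print one retry block in the order the log emits it. */
    for (int s = STEP_RESETTING; s <= STEP_RESET_FAILED; s++) {
        printf("step %d: %s\n", s, step_name[s]);
    }
    return 0;
}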
00:28:44.696 [2024-06-10 10:54:08.884852] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.696 [2024-06-10 10:54:08.885441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.696 [2024-06-10 10:54:08.885458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:44.696 [2024-06-10 10:54:08.885466] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:44.696 [2024-06-10 10:54:08.885686] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:44.696 [2024-06-10 10:54:08.885906] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.696 [2024-06-10 10:54:08.885914] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.696 [2024-06-10 10:54:08.885921] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.696 [2024-06-10 10:54:08.889477] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.696 [2024-06-10 10:54:08.898701] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.697 [2024-06-10 10:54:08.899318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.697 [2024-06-10 10:54:08.899335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:44.697 [2024-06-10 10:54:08.899343] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:44.697 [2024-06-10 10:54:08.899562] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:44.697 [2024-06-10 10:54:08.899782] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.697 [2024-06-10 10:54:08.899797] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.697 [2024-06-10 10:54:08.899804] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.697 [2024-06-10 10:54:08.903357] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.697 [2024-06-10 10:54:08.912564] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.697 [2024-06-10 10:54:08.913173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.697 [2024-06-10 10:54:08.913189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:44.697 [2024-06-10 10:54:08.913196] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:44.697 [2024-06-10 10:54:08.913420] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:44.697 [2024-06-10 10:54:08.913639] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.697 [2024-06-10 10:54:08.913648] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.697 [2024-06-10 10:54:08.913655] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.697 [2024-06-10 10:54:08.917201] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.697 [2024-06-10 10:54:08.926413] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.697 [2024-06-10 10:54:08.927005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.697 [2024-06-10 10:54:08.927021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:44.697 [2024-06-10 10:54:08.927028] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:44.697 [2024-06-10 10:54:08.927252] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:44.697 [2024-06-10 10:54:08.927472] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.697 [2024-06-10 10:54:08.927480] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.697 [2024-06-10 10:54:08.927487] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.697 [2024-06-10 10:54:08.931033] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.697 [2024-06-10 10:54:08.940246] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.697 [2024-06-10 10:54:08.940858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.697 [2024-06-10 10:54:08.940875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:44.697 [2024-06-10 10:54:08.940882] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:44.697 [2024-06-10 10:54:08.941100] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:44.697 [2024-06-10 10:54:08.941326] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.697 [2024-06-10 10:54:08.941336] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.697 [2024-06-10 10:54:08.941342] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.697 [2024-06-10 10:54:08.944883] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.697 [2024-06-10 10:54:08.954090] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.697 [2024-06-10 10:54:08.954616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.697 [2024-06-10 10:54:08.954632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:44.697 [2024-06-10 10:54:08.954640] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:44.697 [2024-06-10 10:54:08.954859] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:44.697 [2024-06-10 10:54:08.955078] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.697 [2024-06-10 10:54:08.955086] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.697 [2024-06-10 10:54:08.955093] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.697 [2024-06-10 10:54:08.958643] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.697 [2024-06-10 10:54:08.968060] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.697 [2024-06-10 10:54:08.968648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.697 [2024-06-10 10:54:08.968663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:44.697 [2024-06-10 10:54:08.968670] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:44.697 [2024-06-10 10:54:08.968888] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:44.697 [2024-06-10 10:54:08.969108] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.697 [2024-06-10 10:54:08.969117] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.697 [2024-06-10 10:54:08.969124] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.697 [2024-06-10 10:54:08.972675] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.959 [2024-06-10 10:54:08.981880] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.959 [2024-06-10 10:54:08.982563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.959 [2024-06-10 10:54:08.982601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:44.959 [2024-06-10 10:54:08.982612] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:44.959 [2024-06-10 10:54:08.982851] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:44.959 [2024-06-10 10:54:08.983075] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.959 [2024-06-10 10:54:08.983084] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.959 [2024-06-10 10:54:08.983092] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.959 [2024-06-10 10:54:08.986645] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.959 [2024-06-10 10:54:08.995858] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.959 [2024-06-10 10:54:08.996449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.959 [2024-06-10 10:54:08.996469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:44.959 [2024-06-10 10:54:08.996477] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:44.959 [2024-06-10 10:54:08.996701] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:44.959 [2024-06-10 10:54:08.996922] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.959 [2024-06-10 10:54:08.996931] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.959 [2024-06-10 10:54:08.996938] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.959 [2024-06-10 10:54:09.000488] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.959 [2024-06-10 10:54:09.009679] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.959 [2024-06-10 10:54:09.010366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.959 [2024-06-10 10:54:09.010405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:44.959 [2024-06-10 10:54:09.010415] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:44.959 [2024-06-10 10:54:09.010654] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:44.959 [2024-06-10 10:54:09.010878] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.959 [2024-06-10 10:54:09.010888] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.959 [2024-06-10 10:54:09.010895] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.959 [2024-06-10 10:54:09.014448] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.959 [2024-06-10 10:54:09.023655] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.959 [2024-06-10 10:54:09.024158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.959 [2024-06-10 10:54:09.024177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:44.959 [2024-06-10 10:54:09.024185] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:44.959 [2024-06-10 10:54:09.024410] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:44.959 [2024-06-10 10:54:09.024630] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.959 [2024-06-10 10:54:09.024641] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.959 [2024-06-10 10:54:09.024648] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.959 [2024-06-10 10:54:09.028192] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.959 [2024-06-10 10:54:09.037614] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.959 [2024-06-10 10:54:09.038223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.959 [2024-06-10 10:54:09.038239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:44.959 [2024-06-10 10:54:09.038253] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:44.959 [2024-06-10 10:54:09.038471] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:44.959 [2024-06-10 10:54:09.038690] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.959 [2024-06-10 10:54:09.038700] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.959 [2024-06-10 10:54:09.038712] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.959 [2024-06-10 10:54:09.042258] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.959 [2024-06-10 10:54:09.051461] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.959 [2024-06-10 10:54:09.052076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.959 [2024-06-10 10:54:09.052091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:44.959 [2024-06-10 10:54:09.052099] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:44.959 [2024-06-10 10:54:09.052322] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:44.959 [2024-06-10 10:54:09.052543] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.959 [2024-06-10 10:54:09.052551] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.959 [2024-06-10 10:54:09.052558] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.959 [2024-06-10 10:54:09.056103] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.959 [2024-06-10 10:54:09.065312] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.959 [2024-06-10 10:54:09.065984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.959 [2024-06-10 10:54:09.066022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:44.959 [2024-06-10 10:54:09.066033] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:44.959 [2024-06-10 10:54:09.066282] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:44.959 [2024-06-10 10:54:09.066507] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.959 [2024-06-10 10:54:09.066517] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.959 [2024-06-10 10:54:09.066524] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.959 [2024-06-10 10:54:09.070072] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
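In this capture the failed attempts repeat on a short, regular cadence: a new "resetting controller" line appears roughly every 14 ms, fails the same way, and is followed by the next attempt, which continues until the listener on 10.0.0.2:4420 comes back or the reconnect policy stops retrying. Below is a schematic retry loop with a fixed delay and a bounded attempt count; try_reconnect() is a hypothetical stand-in for the failing TCP connect, the 14 ms delay mirrors the spacing seen here, and the cap of 50 attempts is an arbitrary choice for the sketch, not SPDK's reconnect policy.

#include <stdbool.h>
#include <stdio.h>
#include <time.h>

/* Hypothetical stand-in for one reconnect attempt; in the log above this is
 * the TCP connect to 10.0.0.2:4420 that keeps being refused. */
static bool try_reconnect(void)
{
    return false; /* nothing is listening yet */
}

int main(void)
{
    /* Roughly the spacing between attempts observed in this capture. */
    const struct timespec delay = { .tv_sec = 0, .tv_nsec = 14 * 1000 * 1000 };

    for (int attempt = 1; attempt <= 50; attempt++) {
        if (try_reconnect()) {
            printf("reconnected on attempt %d\n", attempt);
            return 0;
        }
        printf("attempt %d failed, retrying\n", attempt);
        nanosleep(&delay, NULL);
    }
    printf("giving up\n");
    return 1;
}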
00:28:44.959 [2024-06-10 10:54:09.079313] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.959 [2024-06-10 10:54:09.079947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.959 [2024-06-10 10:54:09.079965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:44.959 [2024-06-10 10:54:09.079972] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:44.959 [2024-06-10 10:54:09.080192] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:44.959 [2024-06-10 10:54:09.080419] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.959 [2024-06-10 10:54:09.080429] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.959 [2024-06-10 10:54:09.080436] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.959 [2024-06-10 10:54:09.083985] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.959 [2024-06-10 10:54:09.093184] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.959 [2024-06-10 10:54:09.093910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.959 [2024-06-10 10:54:09.093950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:44.959 [2024-06-10 10:54:09.093961] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:44.959 [2024-06-10 10:54:09.094199] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:44.959 [2024-06-10 10:54:09.094431] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.959 [2024-06-10 10:54:09.094442] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.959 [2024-06-10 10:54:09.094450] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.959 [2024-06-10 10:54:09.097999] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.959 [2024-06-10 10:54:09.106989] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.959 [2024-06-10 10:54:09.107583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.959 [2024-06-10 10:54:09.107602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:44.959 [2024-06-10 10:54:09.107610] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:44.959 [2024-06-10 10:54:09.107830] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:44.959 [2024-06-10 10:54:09.108049] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.959 [2024-06-10 10:54:09.108059] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.960 [2024-06-10 10:54:09.108066] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.960 [2024-06-10 10:54:09.111616] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.960 [2024-06-10 10:54:09.120809] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.960 [2024-06-10 10:54:09.121487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.960 [2024-06-10 10:54:09.121526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:44.960 [2024-06-10 10:54:09.121536] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:44.960 [2024-06-10 10:54:09.121775] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:44.960 [2024-06-10 10:54:09.121999] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.960 [2024-06-10 10:54:09.122009] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.960 [2024-06-10 10:54:09.122017] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.960 [2024-06-10 10:54:09.125574] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.960 [2024-06-10 10:54:09.134625] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.960 [2024-06-10 10:54:09.135342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.960 [2024-06-10 10:54:09.135380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:44.960 [2024-06-10 10:54:09.135392] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:44.960 [2024-06-10 10:54:09.135632] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:44.960 [2024-06-10 10:54:09.135860] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.960 [2024-06-10 10:54:09.135870] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.960 [2024-06-10 10:54:09.135877] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.960 [2024-06-10 10:54:09.139433] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.960 [2024-06-10 10:54:09.148424] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.960 [2024-06-10 10:54:09.149085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.960 [2024-06-10 10:54:09.149123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:44.960 [2024-06-10 10:54:09.149134] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:44.960 [2024-06-10 10:54:09.149383] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:44.960 [2024-06-10 10:54:09.149608] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.960 [2024-06-10 10:54:09.149617] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.960 [2024-06-10 10:54:09.149625] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.960 [2024-06-10 10:54:09.153175] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.960 [2024-06-10 10:54:09.162387] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.960 [2024-06-10 10:54:09.162998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.960 [2024-06-10 10:54:09.163037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:44.960 [2024-06-10 10:54:09.163048] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:44.960 [2024-06-10 10:54:09.163296] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:44.960 [2024-06-10 10:54:09.163521] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.960 [2024-06-10 10:54:09.163531] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.960 [2024-06-10 10:54:09.163538] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.960 [2024-06-10 10:54:09.167089] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.960 [2024-06-10 10:54:09.176346] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.960 [2024-06-10 10:54:09.176975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.960 [2024-06-10 10:54:09.176994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:44.960 [2024-06-10 10:54:09.177002] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:44.960 [2024-06-10 10:54:09.177222] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:44.960 [2024-06-10 10:54:09.177448] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.960 [2024-06-10 10:54:09.177459] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.960 [2024-06-10 10:54:09.177466] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.960 [2024-06-10 10:54:09.181018] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.960 [2024-06-10 10:54:09.190224] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.960 [2024-06-10 10:54:09.190845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.960 [2024-06-10 10:54:09.190863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:44.960 [2024-06-10 10:54:09.190870] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:44.960 [2024-06-10 10:54:09.191089] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:44.960 [2024-06-10 10:54:09.191314] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.960 [2024-06-10 10:54:09.191324] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.960 [2024-06-10 10:54:09.191331] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.960 [2024-06-10 10:54:09.194886] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.960 [2024-06-10 10:54:09.204090] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.960 [2024-06-10 10:54:09.204687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.960 [2024-06-10 10:54:09.204726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:44.960 [2024-06-10 10:54:09.204736] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:44.960 [2024-06-10 10:54:09.204975] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:44.960 [2024-06-10 10:54:09.205199] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.960 [2024-06-10 10:54:09.205208] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.960 [2024-06-10 10:54:09.205216] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.960 [2024-06-10 10:54:09.208776] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:44.960 [2024-06-10 10:54:09.217989] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.960 [2024-06-10 10:54:09.218715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.960 [2024-06-10 10:54:09.218754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:44.960 [2024-06-10 10:54:09.218765] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:44.960 [2024-06-10 10:54:09.219004] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:44.960 [2024-06-10 10:54:09.219228] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.960 [2024-06-10 10:54:09.219237] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.960 [2024-06-10 10:54:09.219252] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.960 [2024-06-10 10:54:09.222804] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.960 [2024-06-10 10:54:09.231805] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.960 [2024-06-10 10:54:09.232491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.960 [2024-06-10 10:54:09.232530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:44.960 [2024-06-10 10:54:09.232545] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:44.960 [2024-06-10 10:54:09.232784] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:44.960 [2024-06-10 10:54:09.233009] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.960 [2024-06-10 10:54:09.233019] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.960 [2024-06-10 10:54:09.233026] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.960 [2024-06-10 10:54:09.236580] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.223 [2024-06-10 10:54:09.245779] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.223 [2024-06-10 10:54:09.246533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.223 [2024-06-10 10:54:09.246571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.223 [2024-06-10 10:54:09.246582] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.223 [2024-06-10 10:54:09.246821] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.223 [2024-06-10 10:54:09.247046] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.223 [2024-06-10 10:54:09.247055] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.223 [2024-06-10 10:54:09.247063] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.223 [2024-06-10 10:54:09.250616] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.223 [2024-06-10 10:54:09.259605] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.223 [2024-06-10 10:54:09.260317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.223 [2024-06-10 10:54:09.260355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.223 [2024-06-10 10:54:09.260366] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.223 [2024-06-10 10:54:09.260604] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.223 [2024-06-10 10:54:09.260828] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.223 [2024-06-10 10:54:09.260839] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.223 [2024-06-10 10:54:09.260847] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.223 [2024-06-10 10:54:09.264403] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.223 [2024-06-10 10:54:09.273411] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.223 [2024-06-10 10:54:09.274036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.223 [2024-06-10 10:54:09.274054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.223 [2024-06-10 10:54:09.274062] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.223 [2024-06-10 10:54:09.274288] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.223 [2024-06-10 10:54:09.274509] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.223 [2024-06-10 10:54:09.274522] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.223 [2024-06-10 10:54:09.274529] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.223 [2024-06-10 10:54:09.278077] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.223 [2024-06-10 10:54:09.287317] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.223 [2024-06-10 10:54:09.287966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.223 [2024-06-10 10:54:09.288005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.223 [2024-06-10 10:54:09.288016] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.223 [2024-06-10 10:54:09.288263] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.223 [2024-06-10 10:54:09.288488] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.223 [2024-06-10 10:54:09.288498] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.223 [2024-06-10 10:54:09.288506] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.223 [2024-06-10 10:54:09.292054] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.223 [2024-06-10 10:54:09.301271] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.223 [2024-06-10 10:54:09.301991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.223 [2024-06-10 10:54:09.302030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.223 [2024-06-10 10:54:09.302040] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.223 [2024-06-10 10:54:09.302286] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.223 [2024-06-10 10:54:09.302510] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.223 [2024-06-10 10:54:09.302520] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.223 [2024-06-10 10:54:09.302528] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.223 [2024-06-10 10:54:09.306076] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.223 [2024-06-10 10:54:09.315067] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.223 [2024-06-10 10:54:09.315699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.223 [2024-06-10 10:54:09.315718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.223 [2024-06-10 10:54:09.315726] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.223 [2024-06-10 10:54:09.315945] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.223 [2024-06-10 10:54:09.316165] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.223 [2024-06-10 10:54:09.316173] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.223 [2024-06-10 10:54:09.316181] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.223 [2024-06-10 10:54:09.319731] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.223 [2024-06-10 10:54:09.328943] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.223 [2024-06-10 10:54:09.329522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.223 [2024-06-10 10:54:09.329538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.223 [2024-06-10 10:54:09.329546] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.223 [2024-06-10 10:54:09.329765] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.223 [2024-06-10 10:54:09.329984] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.223 [2024-06-10 10:54:09.329993] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.223 [2024-06-10 10:54:09.330001] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.223 [2024-06-10 10:54:09.333551] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.223 [2024-06-10 10:54:09.342752] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.223 [2024-06-10 10:54:09.343549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.223 [2024-06-10 10:54:09.343587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.223 [2024-06-10 10:54:09.343598] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.223 [2024-06-10 10:54:09.343837] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.224 [2024-06-10 10:54:09.344062] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.224 [2024-06-10 10:54:09.344071] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.224 [2024-06-10 10:54:09.344079] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.224 [2024-06-10 10:54:09.347633] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.224 [2024-06-10 10:54:09.356629] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.224 [2024-06-10 10:54:09.357212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.224 [2024-06-10 10:54:09.357231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.224 [2024-06-10 10:54:09.357239] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.224 [2024-06-10 10:54:09.357466] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.224 [2024-06-10 10:54:09.357686] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.224 [2024-06-10 10:54:09.357694] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.224 [2024-06-10 10:54:09.357701] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.224 [2024-06-10 10:54:09.361240] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.224 [2024-06-10 10:54:09.370450] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.224 [2024-06-10 10:54:09.371155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.224 [2024-06-10 10:54:09.371194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.224 [2024-06-10 10:54:09.371206] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.224 [2024-06-10 10:54:09.371461] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.224 [2024-06-10 10:54:09.371686] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.224 [2024-06-10 10:54:09.371695] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.224 [2024-06-10 10:54:09.371703] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.224 [2024-06-10 10:54:09.375252] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.224 [2024-06-10 10:54:09.384254] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.224 [2024-06-10 10:54:09.384924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.224 [2024-06-10 10:54:09.384962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.224 [2024-06-10 10:54:09.384973] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.224 [2024-06-10 10:54:09.385211] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.224 [2024-06-10 10:54:09.385445] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.224 [2024-06-10 10:54:09.385456] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.224 [2024-06-10 10:54:09.385463] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.224 [2024-06-10 10:54:09.389013] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.224 [2024-06-10 10:54:09.398235] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.224 [2024-06-10 10:54:09.398838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.224 [2024-06-10 10:54:09.398857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.224 [2024-06-10 10:54:09.398865] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.224 [2024-06-10 10:54:09.399084] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.224 [2024-06-10 10:54:09.399310] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.224 [2024-06-10 10:54:09.399319] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.224 [2024-06-10 10:54:09.399326] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.224 [2024-06-10 10:54:09.402870] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.224 [2024-06-10 10:54:09.412079] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.224 [2024-06-10 10:54:09.412706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.224 [2024-06-10 10:54:09.412722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.224 [2024-06-10 10:54:09.412730] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.224 [2024-06-10 10:54:09.412949] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.224 [2024-06-10 10:54:09.413168] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.224 [2024-06-10 10:54:09.413177] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.224 [2024-06-10 10:54:09.413188] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.224 [2024-06-10 10:54:09.416736] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.224 [2024-06-10 10:54:09.425973] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.224 [2024-06-10 10:54:09.426555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.224 [2024-06-10 10:54:09.426572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.224 [2024-06-10 10:54:09.426580] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.224 [2024-06-10 10:54:09.426800] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.224 [2024-06-10 10:54:09.427020] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.225 [2024-06-10 10:54:09.427029] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.225 [2024-06-10 10:54:09.427036] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.225 [2024-06-10 10:54:09.430585] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.225 [2024-06-10 10:54:09.439793] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.225 [2024-06-10 10:54:09.440484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.225 [2024-06-10 10:54:09.440523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.225 [2024-06-10 10:54:09.440534] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.225 [2024-06-10 10:54:09.440773] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.225 [2024-06-10 10:54:09.440996] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.225 [2024-06-10 10:54:09.441006] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.225 [2024-06-10 10:54:09.441014] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.225 [2024-06-10 10:54:09.444564] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.225 [2024-06-10 10:54:09.453768] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.225 [2024-06-10 10:54:09.454349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.225 [2024-06-10 10:54:09.454368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.225 [2024-06-10 10:54:09.454376] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.225 [2024-06-10 10:54:09.454596] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.225 [2024-06-10 10:54:09.454815] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.225 [2024-06-10 10:54:09.454824] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.225 [2024-06-10 10:54:09.454832] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.225 [2024-06-10 10:54:09.458411] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.225 [2024-06-10 10:54:09.467610] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.225 [2024-06-10 10:54:09.468324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.225 [2024-06-10 10:54:09.468362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.225 [2024-06-10 10:54:09.468373] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.225 [2024-06-10 10:54:09.468612] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.225 [2024-06-10 10:54:09.468836] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.225 [2024-06-10 10:54:09.468846] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.225 [2024-06-10 10:54:09.468853] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.225 [2024-06-10 10:54:09.472407] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.225 [2024-06-10 10:54:09.481600] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.225 [2024-06-10 10:54:09.482320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.225 [2024-06-10 10:54:09.482358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.225 [2024-06-10 10:54:09.482369] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.225 [2024-06-10 10:54:09.482607] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.225 [2024-06-10 10:54:09.482832] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.225 [2024-06-10 10:54:09.482841] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.225 [2024-06-10 10:54:09.482849] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.225 [2024-06-10 10:54:09.486405] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.225 [2024-06-10 10:54:09.495419] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.225 [2024-06-10 10:54:09.496004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.225 [2024-06-10 10:54:09.496023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.225 [2024-06-10 10:54:09.496033] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.225 [2024-06-10 10:54:09.496282] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.225 [2024-06-10 10:54:09.496506] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.225 [2024-06-10 10:54:09.496515] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.225 [2024-06-10 10:54:09.496522] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.225 [2024-06-10 10:54:09.500065] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.487 [2024-06-10 10:54:09.509274] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.487 [2024-06-10 10:54:09.509891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.487 [2024-06-10 10:54:09.509908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.487 [2024-06-10 10:54:09.509915] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.487 [2024-06-10 10:54:09.510134] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.487 [2024-06-10 10:54:09.510370] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.487 [2024-06-10 10:54:09.510380] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.487 [2024-06-10 10:54:09.510388] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.487 [2024-06-10 10:54:09.513930] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.487 [2024-06-10 10:54:09.523136] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.487 [2024-06-10 10:54:09.523717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.487 [2024-06-10 10:54:09.523733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.487 [2024-06-10 10:54:09.523741] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.487 [2024-06-10 10:54:09.523960] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.487 [2024-06-10 10:54:09.524179] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.487 [2024-06-10 10:54:09.524188] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.487 [2024-06-10 10:54:09.524195] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.487 [2024-06-10 10:54:09.527743] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.487 [2024-06-10 10:54:09.536944] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.487 [2024-06-10 10:54:09.537527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.487 [2024-06-10 10:54:09.537544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.487 [2024-06-10 10:54:09.537552] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.487 [2024-06-10 10:54:09.537771] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.487 [2024-06-10 10:54:09.537990] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.487 [2024-06-10 10:54:09.537999] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.487 [2024-06-10 10:54:09.538007] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.488 [2024-06-10 10:54:09.541556] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.488 [2024-06-10 10:54:09.550768] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.488 [2024-06-10 10:54:09.551524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.488 [2024-06-10 10:54:09.551562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.488 [2024-06-10 10:54:09.551573] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.488 [2024-06-10 10:54:09.551812] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.488 [2024-06-10 10:54:09.552037] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.488 [2024-06-10 10:54:09.552046] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.488 [2024-06-10 10:54:09.552054] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.488 [2024-06-10 10:54:09.555612] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.488 [2024-06-10 10:54:09.564605] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.488 [2024-06-10 10:54:09.565319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.488 [2024-06-10 10:54:09.565358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.488 [2024-06-10 10:54:09.565370] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.488 [2024-06-10 10:54:09.565610] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.488 [2024-06-10 10:54:09.565834] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.488 [2024-06-10 10:54:09.565843] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.488 [2024-06-10 10:54:09.565851] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.488 [2024-06-10 10:54:09.569405] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.488 [2024-06-10 10:54:09.578595] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.488 [2024-06-10 10:54:09.579277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.488 [2024-06-10 10:54:09.579315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.488 [2024-06-10 10:54:09.579326] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.488 [2024-06-10 10:54:09.579564] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.488 [2024-06-10 10:54:09.579789] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.488 [2024-06-10 10:54:09.579798] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.488 [2024-06-10 10:54:09.579806] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.488 [2024-06-10 10:54:09.583366] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.488 [2024-06-10 10:54:09.592575] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.488 [2024-06-10 10:54:09.593276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.488 [2024-06-10 10:54:09.593315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.488 [2024-06-10 10:54:09.593326] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.488 [2024-06-10 10:54:09.593564] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.488 [2024-06-10 10:54:09.593789] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.488 [2024-06-10 10:54:09.593798] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.488 [2024-06-10 10:54:09.593806] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.488 [2024-06-10 10:54:09.597369] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.488 [2024-06-10 10:54:09.606573] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.488 [2024-06-10 10:54:09.607297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.488 [2024-06-10 10:54:09.607336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.488 [2024-06-10 10:54:09.607352] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.488 [2024-06-10 10:54:09.607592] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.488 [2024-06-10 10:54:09.607816] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.488 [2024-06-10 10:54:09.607825] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.488 [2024-06-10 10:54:09.607833] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.488 [2024-06-10 10:54:09.611388] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.488 [2024-06-10 10:54:09.620382] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.488 [2024-06-10 10:54:09.621108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.488 [2024-06-10 10:54:09.621146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.488 [2024-06-10 10:54:09.621156] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.488 [2024-06-10 10:54:09.621405] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.488 [2024-06-10 10:54:09.621630] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.488 [2024-06-10 10:54:09.621639] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.488 [2024-06-10 10:54:09.621646] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.488 [2024-06-10 10:54:09.625192] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.488 [2024-06-10 10:54:09.634183] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.488 [2024-06-10 10:54:09.634862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.488 [2024-06-10 10:54:09.634900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.488 [2024-06-10 10:54:09.634910] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.488 [2024-06-10 10:54:09.635148] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.488 [2024-06-10 10:54:09.635381] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.488 [2024-06-10 10:54:09.635392] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.488 [2024-06-10 10:54:09.635400] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.488 [2024-06-10 10:54:09.638947] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.488 [2024-06-10 10:54:09.648160] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.488 [2024-06-10 10:54:09.648887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.488 [2024-06-10 10:54:09.648925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.488 [2024-06-10 10:54:09.648936] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.488 [2024-06-10 10:54:09.649174] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.488 [2024-06-10 10:54:09.649408] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.488 [2024-06-10 10:54:09.649422] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.488 [2024-06-10 10:54:09.649429] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.488 [2024-06-10 10:54:09.652980] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.488 [2024-06-10 10:54:09.661983] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.488 [2024-06-10 10:54:09.662596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.488 [2024-06-10 10:54:09.662615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.488 [2024-06-10 10:54:09.662623] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.488 [2024-06-10 10:54:09.662842] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.488 [2024-06-10 10:54:09.663061] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.488 [2024-06-10 10:54:09.663071] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.488 [2024-06-10 10:54:09.663078] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.488 [2024-06-10 10:54:09.666631] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.488 [2024-06-10 10:54:09.675834] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.488 [2024-06-10 10:54:09.676425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.488 [2024-06-10 10:54:09.676442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.488 [2024-06-10 10:54:09.676450] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.488 [2024-06-10 10:54:09.676669] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.488 [2024-06-10 10:54:09.676890] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.488 [2024-06-10 10:54:09.676899] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.488 [2024-06-10 10:54:09.676906] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.488 [2024-06-10 10:54:09.680456] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.488 [2024-06-10 10:54:09.689665] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.488 [2024-06-10 10:54:09.690277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.489 [2024-06-10 10:54:09.690293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.489 [2024-06-10 10:54:09.690300] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.489 [2024-06-10 10:54:09.690519] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.489 [2024-06-10 10:54:09.690738] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.489 [2024-06-10 10:54:09.690747] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.489 [2024-06-10 10:54:09.690754] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.489 [2024-06-10 10:54:09.694309] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.489 [2024-06-10 10:54:09.703555] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.489 [2024-06-10 10:54:09.704170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.489 [2024-06-10 10:54:09.704186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.489 [2024-06-10 10:54:09.704194] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.489 [2024-06-10 10:54:09.704420] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.489 [2024-06-10 10:54:09.704640] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.489 [2024-06-10 10:54:09.704649] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.489 [2024-06-10 10:54:09.704656] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.489 [2024-06-10 10:54:09.708202] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.489 [2024-06-10 10:54:09.717457] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.489 [2024-06-10 10:54:09.718144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.489 [2024-06-10 10:54:09.718182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.489 [2024-06-10 10:54:09.718193] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.489 [2024-06-10 10:54:09.718442] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.489 [2024-06-10 10:54:09.718666] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.489 [2024-06-10 10:54:09.718676] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.489 [2024-06-10 10:54:09.718684] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.489 [2024-06-10 10:54:09.722237] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.489 [2024-06-10 10:54:09.731459] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.489 [2024-06-10 10:54:09.732045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.489 [2024-06-10 10:54:09.732064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.489 [2024-06-10 10:54:09.732072] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.489 [2024-06-10 10:54:09.732301] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.489 [2024-06-10 10:54:09.732521] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.489 [2024-06-10 10:54:09.732530] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.489 [2024-06-10 10:54:09.732537] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.489 [2024-06-10 10:54:09.736086] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.489 [2024-06-10 10:54:09.745298] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.489 [2024-06-10 10:54:09.746008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.489 [2024-06-10 10:54:09.746048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.489 [2024-06-10 10:54:09.746061] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.489 [2024-06-10 10:54:09.746314] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.489 [2024-06-10 10:54:09.746540] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.489 [2024-06-10 10:54:09.746549] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.489 [2024-06-10 10:54:09.746557] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.489 [2024-06-10 10:54:09.750109] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.489 [2024-06-10 10:54:09.759110] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.489 [2024-06-10 10:54:09.759816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.489 [2024-06-10 10:54:09.759855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.489 [2024-06-10 10:54:09.759866] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.489 [2024-06-10 10:54:09.760105] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.489 [2024-06-10 10:54:09.760340] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.489 [2024-06-10 10:54:09.760350] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.489 [2024-06-10 10:54:09.760358] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.489 [2024-06-10 10:54:09.763911] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.489 [2024-06-10 10:54:09.772912] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.751 [2024-06-10 10:54:09.773597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.751 [2024-06-10 10:54:09.773636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.751 [2024-06-10 10:54:09.773646] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.751 [2024-06-10 10:54:09.773885] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.751 [2024-06-10 10:54:09.774109] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.751 [2024-06-10 10:54:09.774119] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.751 [2024-06-10 10:54:09.774127] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.751 [2024-06-10 10:54:09.777678] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.751 [2024-06-10 10:54:09.786876] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.751 [2024-06-10 10:54:09.787588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.751 [2024-06-10 10:54:09.787627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.751 [2024-06-10 10:54:09.787637] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.751 [2024-06-10 10:54:09.787876] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.751 [2024-06-10 10:54:09.788101] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.751 [2024-06-10 10:54:09.788110] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.751 [2024-06-10 10:54:09.788123] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.751 [2024-06-10 10:54:09.791763] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.751 [2024-06-10 10:54:09.800777] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.751 [2024-06-10 10:54:09.801491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.751 [2024-06-10 10:54:09.801530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.751 [2024-06-10 10:54:09.801541] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.751 [2024-06-10 10:54:09.801779] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.751 [2024-06-10 10:54:09.802003] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.751 [2024-06-10 10:54:09.802013] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.751 [2024-06-10 10:54:09.802021] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.751 [2024-06-10 10:54:09.805583] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.751 [2024-06-10 10:54:09.814575] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.751 [2024-06-10 10:54:09.815201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.751 [2024-06-10 10:54:09.815219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.751 [2024-06-10 10:54:09.815227] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.751 [2024-06-10 10:54:09.815455] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.751 [2024-06-10 10:54:09.815675] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.751 [2024-06-10 10:54:09.815684] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.751 [2024-06-10 10:54:09.815691] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.751 [2024-06-10 10:54:09.819233] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.751 [2024-06-10 10:54:09.828424] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.751 [2024-06-10 10:54:09.829039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.751 [2024-06-10 10:54:09.829055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.751 [2024-06-10 10:54:09.829062] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.751 [2024-06-10 10:54:09.829286] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.751 [2024-06-10 10:54:09.829506] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.751 [2024-06-10 10:54:09.829515] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.751 [2024-06-10 10:54:09.829522] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.751 [2024-06-10 10:54:09.833060] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.751 [2024-06-10 10:54:09.842250] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.751 [2024-06-10 10:54:09.842866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.751 [2024-06-10 10:54:09.842881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.751 [2024-06-10 10:54:09.842889] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.751 [2024-06-10 10:54:09.843107] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.751 [2024-06-10 10:54:09.843332] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.751 [2024-06-10 10:54:09.843341] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.751 [2024-06-10 10:54:09.843348] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.751 [2024-06-10 10:54:09.846887] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.751 [2024-06-10 10:54:09.856076] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.751 [2024-06-10 10:54:09.856726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.752 [2024-06-10 10:54:09.856764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.752 [2024-06-10 10:54:09.856775] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.752 [2024-06-10 10:54:09.857013] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.752 [2024-06-10 10:54:09.857237] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.752 [2024-06-10 10:54:09.857254] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.752 [2024-06-10 10:54:09.857262] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.752 [2024-06-10 10:54:09.860812] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.752 [2024-06-10 10:54:09.870018] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.752 [2024-06-10 10:54:09.870790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.752 [2024-06-10 10:54:09.870829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.752 [2024-06-10 10:54:09.870840] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.752 [2024-06-10 10:54:09.871078] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.752 [2024-06-10 10:54:09.871309] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.752 [2024-06-10 10:54:09.871319] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.752 [2024-06-10 10:54:09.871327] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.752 [2024-06-10 10:54:09.874873] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.752 [2024-06-10 10:54:09.884082] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.752 [2024-06-10 10:54:09.884677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.752 [2024-06-10 10:54:09.884696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.752 [2024-06-10 10:54:09.884703] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.752 [2024-06-10 10:54:09.884923] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.752 [2024-06-10 10:54:09.885148] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.752 [2024-06-10 10:54:09.885157] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.752 [2024-06-10 10:54:09.885164] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.752 [2024-06-10 10:54:09.888714] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.752 [2024-06-10 10:54:09.897922] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.752 [2024-06-10 10:54:09.898599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.752 [2024-06-10 10:54:09.898637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.752 [2024-06-10 10:54:09.898648] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.752 [2024-06-10 10:54:09.898887] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.752 [2024-06-10 10:54:09.899111] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.752 [2024-06-10 10:54:09.899120] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.752 [2024-06-10 10:54:09.899128] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.752 [2024-06-10 10:54:09.902680] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.752 [2024-06-10 10:54:09.911715] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.752 [2024-06-10 10:54:09.912310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.752 [2024-06-10 10:54:09.912337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.752 [2024-06-10 10:54:09.912346] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.752 [2024-06-10 10:54:09.912571] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.752 [2024-06-10 10:54:09.912792] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.752 [2024-06-10 10:54:09.912801] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.752 [2024-06-10 10:54:09.912809] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.752 [2024-06-10 10:54:09.916356] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.752 [2024-06-10 10:54:09.925550] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.752 [2024-06-10 10:54:09.926149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.752 [2024-06-10 10:54:09.926188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.752 [2024-06-10 10:54:09.926200] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.752 [2024-06-10 10:54:09.926449] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.752 [2024-06-10 10:54:09.926674] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.752 [2024-06-10 10:54:09.926684] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.752 [2024-06-10 10:54:09.926691] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.752 [2024-06-10 10:54:09.930250] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.752 [2024-06-10 10:54:09.939452] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.752 [2024-06-10 10:54:09.940070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.752 [2024-06-10 10:54:09.940089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.752 [2024-06-10 10:54:09.940096] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.752 [2024-06-10 10:54:09.940321] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.752 [2024-06-10 10:54:09.940541] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.752 [2024-06-10 10:54:09.940551] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.752 [2024-06-10 10:54:09.940558] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.752 [2024-06-10 10:54:09.944098] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.752 [2024-06-10 10:54:09.953291] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.752 [2024-06-10 10:54:09.953894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.752 [2024-06-10 10:54:09.953910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.752 [2024-06-10 10:54:09.953917] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.752 [2024-06-10 10:54:09.954135] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.752 [2024-06-10 10:54:09.954360] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.752 [2024-06-10 10:54:09.954370] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.752 [2024-06-10 10:54:09.954377] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.752 [2024-06-10 10:54:09.957918] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.752 [2024-06-10 10:54:09.967103] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.752 [2024-06-10 10:54:09.967679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.752 [2024-06-10 10:54:09.967695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.752 [2024-06-10 10:54:09.967703] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.752 [2024-06-10 10:54:09.967921] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.752 [2024-06-10 10:54:09.968141] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.752 [2024-06-10 10:54:09.968149] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.752 [2024-06-10 10:54:09.968157] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.752 [2024-06-10 10:54:09.971738] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.752 [2024-06-10 10:54:09.980938] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.752 [2024-06-10 10:54:09.981676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.752 [2024-06-10 10:54:09.981715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.752 [2024-06-10 10:54:09.981729] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.752 [2024-06-10 10:54:09.981968] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.752 [2024-06-10 10:54:09.982192] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.752 [2024-06-10 10:54:09.982201] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.752 [2024-06-10 10:54:09.982209] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.752 [2024-06-10 10:54:09.985766] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.752 [2024-06-10 10:54:09.994774] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.752 [2024-06-10 10:54:09.995525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.752 [2024-06-10 10:54:09.995564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.752 [2024-06-10 10:54:09.995575] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.753 [2024-06-10 10:54:09.995815] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.753 [2024-06-10 10:54:09.996039] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.753 [2024-06-10 10:54:09.996048] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.753 [2024-06-10 10:54:09.996055] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.753 [2024-06-10 10:54:09.999610] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.753 [2024-06-10 10:54:10.009816] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.753 [2024-06-10 10:54:10.010344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.753 [2024-06-10 10:54:10.010369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.753 [2024-06-10 10:54:10.010382] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.753 [2024-06-10 10:54:10.010672] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.753 [2024-06-10 10:54:10.010963] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.753 [2024-06-10 10:54:10.010976] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.753 [2024-06-10 10:54:10.010988] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.753 [2024-06-10 10:54:10.015628] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.753 [2024-06-10 10:54:10.024929] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.753 [2024-06-10 10:54:10.025410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.753 [2024-06-10 10:54:10.025431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:45.753 [2024-06-10 10:54:10.025439] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:45.753 [2024-06-10 10:54:10.025661] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:45.753 [2024-06-10 10:54:10.025883] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.753 [2024-06-10 10:54:10.025896] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.753 [2024-06-10 10:54:10.025904] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.753 [2024-06-10 10:54:10.029457] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.015 [2024-06-10 10:54:10.038862] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.015 [2024-06-10 10:54:10.039456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.015 [2024-06-10 10:54:10.039474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.015 [2024-06-10 10:54:10.039483] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.015 [2024-06-10 10:54:10.039702] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.015 [2024-06-10 10:54:10.039921] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.015 [2024-06-10 10:54:10.039931] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.015 [2024-06-10 10:54:10.039938] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.015 [2024-06-10 10:54:10.043486] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.015 [2024-06-10 10:54:10.052679] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.015 [2024-06-10 10:54:10.053208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.015 [2024-06-10 10:54:10.053225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.015 [2024-06-10 10:54:10.053234] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.015 [2024-06-10 10:54:10.053459] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.015 [2024-06-10 10:54:10.053680] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.015 [2024-06-10 10:54:10.053689] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.015 [2024-06-10 10:54:10.053696] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.015 [2024-06-10 10:54:10.057261] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.015 [2024-06-10 10:54:10.066466] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.015 [2024-06-10 10:54:10.067155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.015 [2024-06-10 10:54:10.067194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.015 [2024-06-10 10:54:10.067207] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.015 [2024-06-10 10:54:10.067458] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.015 [2024-06-10 10:54:10.067683] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.015 [2024-06-10 10:54:10.067693] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.015 [2024-06-10 10:54:10.067701] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.015 [2024-06-10 10:54:10.071254] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.015 [2024-06-10 10:54:10.080255] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.015 [2024-06-10 10:54:10.080913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.015 [2024-06-10 10:54:10.080951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.015 [2024-06-10 10:54:10.080963] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.015 [2024-06-10 10:54:10.081202] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.015 [2024-06-10 10:54:10.081434] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.015 [2024-06-10 10:54:10.081445] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.015 [2024-06-10 10:54:10.081453] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.015 [2024-06-10 10:54:10.085004] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.015 [2024-06-10 10:54:10.094150] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.015 [2024-06-10 10:54:10.094733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.015 [2024-06-10 10:54:10.094754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.015 [2024-06-10 10:54:10.094762] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.015 [2024-06-10 10:54:10.094983] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.015 [2024-06-10 10:54:10.095203] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.015 [2024-06-10 10:54:10.095212] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.015 [2024-06-10 10:54:10.095219] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.016 [2024-06-10 10:54:10.098766] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.016 [2024-06-10 10:54:10.107964] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.016 [2024-06-10 10:54:10.108651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.016 [2024-06-10 10:54:10.108689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.016 [2024-06-10 10:54:10.108700] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.016 [2024-06-10 10:54:10.108938] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.016 [2024-06-10 10:54:10.109163] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.016 [2024-06-10 10:54:10.109173] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.016 [2024-06-10 10:54:10.109181] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.016 [2024-06-10 10:54:10.112742] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.016 [2024-06-10 10:54:10.121993] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.016 [2024-06-10 10:54:10.122535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.016 [2024-06-10 10:54:10.122554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.016 [2024-06-10 10:54:10.122562] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.016 [2024-06-10 10:54:10.122787] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.016 [2024-06-10 10:54:10.123008] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.016 [2024-06-10 10:54:10.123016] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.016 [2024-06-10 10:54:10.123024] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.016 [2024-06-10 10:54:10.126569] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.016 [2024-06-10 10:54:10.135976] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.016 [2024-06-10 10:54:10.136574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.016 [2024-06-10 10:54:10.136591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.016 [2024-06-10 10:54:10.136599] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.016 [2024-06-10 10:54:10.136819] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.016 [2024-06-10 10:54:10.137039] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.016 [2024-06-10 10:54:10.137048] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.016 [2024-06-10 10:54:10.137055] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.016 [2024-06-10 10:54:10.140604] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.016 [2024-06-10 10:54:10.149804] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.016 [2024-06-10 10:54:10.150258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.016 [2024-06-10 10:54:10.150274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.016 [2024-06-10 10:54:10.150282] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.016 [2024-06-10 10:54:10.150500] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.016 [2024-06-10 10:54:10.150720] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.016 [2024-06-10 10:54:10.150729] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.016 [2024-06-10 10:54:10.150736] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.016 [2024-06-10 10:54:10.154282] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.016 [2024-06-10 10:54:10.163686] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.016 [2024-06-10 10:54:10.164280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.016 [2024-06-10 10:54:10.164319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.016 [2024-06-10 10:54:10.164330] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.016 [2024-06-10 10:54:10.164568] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.016 [2024-06-10 10:54:10.164793] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.016 [2024-06-10 10:54:10.164802] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.016 [2024-06-10 10:54:10.164814] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.016 [2024-06-10 10:54:10.168374] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.016 [2024-06-10 10:54:10.177573] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.016 [2024-06-10 10:54:10.178310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.016 [2024-06-10 10:54:10.178349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.016 [2024-06-10 10:54:10.178361] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.016 [2024-06-10 10:54:10.178603] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.016 [2024-06-10 10:54:10.178828] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.016 [2024-06-10 10:54:10.178837] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.016 [2024-06-10 10:54:10.178845] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.016 [2024-06-10 10:54:10.182398] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.016 [2024-06-10 10:54:10.191392] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.016 [2024-06-10 10:54:10.192025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.016 [2024-06-10 10:54:10.192044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.016 [2024-06-10 10:54:10.192052] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.016 [2024-06-10 10:54:10.192276] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.016 [2024-06-10 10:54:10.192496] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.016 [2024-06-10 10:54:10.192505] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.016 [2024-06-10 10:54:10.192513] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.016 [2024-06-10 10:54:10.196064] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.016 [2024-06-10 10:54:10.205261] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.016 [2024-06-10 10:54:10.205719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.016 [2024-06-10 10:54:10.205735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.016 [2024-06-10 10:54:10.205743] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.016 [2024-06-10 10:54:10.205961] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.016 [2024-06-10 10:54:10.206180] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.016 [2024-06-10 10:54:10.206188] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.016 [2024-06-10 10:54:10.206195] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.016 [2024-06-10 10:54:10.209741] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.016 [2024-06-10 10:54:10.219156] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.016 [2024-06-10 10:54:10.219883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.016 [2024-06-10 10:54:10.219922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.016 [2024-06-10 10:54:10.219933] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.016 [2024-06-10 10:54:10.220171] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.016 [2024-06-10 10:54:10.220403] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.016 [2024-06-10 10:54:10.220413] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.016 [2024-06-10 10:54:10.220421] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.016 [2024-06-10 10:54:10.223970] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.016 [2024-06-10 10:54:10.232963] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.016 [2024-06-10 10:54:10.233580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.016 [2024-06-10 10:54:10.233618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.016 [2024-06-10 10:54:10.233629] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.016 [2024-06-10 10:54:10.233867] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.016 [2024-06-10 10:54:10.234091] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.016 [2024-06-10 10:54:10.234100] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.016 [2024-06-10 10:54:10.234107] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.016 [2024-06-10 10:54:10.237662] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.016 [2024-06-10 10:54:10.246861] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.016 [2024-06-10 10:54:10.247461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.017 [2024-06-10 10:54:10.247482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.017 [2024-06-10 10:54:10.247489] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.017 [2024-06-10 10:54:10.247709] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.017 [2024-06-10 10:54:10.247929] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.017 [2024-06-10 10:54:10.247938] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.017 [2024-06-10 10:54:10.247945] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.017 [2024-06-10 10:54:10.251494] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.017 [2024-06-10 10:54:10.260689] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.017 [2024-06-10 10:54:10.261380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.017 [2024-06-10 10:54:10.261419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.017 [2024-06-10 10:54:10.261432] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.017 [2024-06-10 10:54:10.261676] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.017 [2024-06-10 10:54:10.261901] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.017 [2024-06-10 10:54:10.261910] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.017 [2024-06-10 10:54:10.261918] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.017 [2024-06-10 10:54:10.265474] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.017 [2024-06-10 10:54:10.274676] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.017 [2024-06-10 10:54:10.275262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.017 [2024-06-10 10:54:10.275282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.017 [2024-06-10 10:54:10.275290] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.017 [2024-06-10 10:54:10.275510] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.017 [2024-06-10 10:54:10.275729] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.017 [2024-06-10 10:54:10.275739] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.017 [2024-06-10 10:54:10.275746] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.017 [2024-06-10 10:54:10.279293] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.017 [2024-06-10 10:54:10.288494] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.017 [2024-06-10 10:54:10.289204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.017 [2024-06-10 10:54:10.289250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.017 [2024-06-10 10:54:10.289263] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.017 [2024-06-10 10:54:10.289502] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.017 [2024-06-10 10:54:10.289727] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.017 [2024-06-10 10:54:10.289736] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.017 [2024-06-10 10:54:10.289744] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.017 [2024-06-10 10:54:10.293296] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.279 [2024-06-10 10:54:10.302304] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.279 [2024-06-10 10:54:10.302932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-06-10 10:54:10.302951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.279 [2024-06-10 10:54:10.302959] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.279 [2024-06-10 10:54:10.303178] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.279 [2024-06-10 10:54:10.303403] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.279 [2024-06-10 10:54:10.303412] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.279 [2024-06-10 10:54:10.303419] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.279 [2024-06-10 10:54:10.306964] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.279 [2024-06-10 10:54:10.316162] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.279 [2024-06-10 10:54:10.316774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-06-10 10:54:10.316790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.279 [2024-06-10 10:54:10.316797] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.279 [2024-06-10 10:54:10.317016] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.279 [2024-06-10 10:54:10.317235] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.279 [2024-06-10 10:54:10.317251] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.279 [2024-06-10 10:54:10.317258] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.279 [2024-06-10 10:54:10.320803] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.279 [2024-06-10 10:54:10.330032] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.279 [2024-06-10 10:54:10.330627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-06-10 10:54:10.330665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.279 [2024-06-10 10:54:10.330676] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.279 [2024-06-10 10:54:10.330914] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.279 [2024-06-10 10:54:10.331138] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.279 [2024-06-10 10:54:10.331148] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.279 [2024-06-10 10:54:10.331155] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.279 [2024-06-10 10:54:10.334711] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.279 [2024-06-10 10:54:10.343914] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.279 [2024-06-10 10:54:10.344593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-06-10 10:54:10.344632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.279 [2024-06-10 10:54:10.344642] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.279 [2024-06-10 10:54:10.344881] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.279 [2024-06-10 10:54:10.345106] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.279 [2024-06-10 10:54:10.345115] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.279 [2024-06-10 10:54:10.345123] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.279 [2024-06-10 10:54:10.348681] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.279 [2024-06-10 10:54:10.357881] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.279 [2024-06-10 10:54:10.358585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-06-10 10:54:10.358628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.279 [2024-06-10 10:54:10.358641] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.279 [2024-06-10 10:54:10.358881] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.279 [2024-06-10 10:54:10.359105] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.279 [2024-06-10 10:54:10.359115] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.279 [2024-06-10 10:54:10.359123] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.279 [2024-06-10 10:54:10.362682] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.279 [2024-06-10 10:54:10.371676] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.279 [2024-06-10 10:54:10.372383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.279 [2024-06-10 10:54:10.372422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.279 [2024-06-10 10:54:10.372434] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.279 [2024-06-10 10:54:10.372674] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.279 [2024-06-10 10:54:10.372898] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.279 [2024-06-10 10:54:10.372908] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.279 [2024-06-10 10:54:10.372916] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.279 [2024-06-10 10:54:10.376474] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.280 [2024-06-10 10:54:10.385466] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.280 [2024-06-10 10:54:10.386161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-06-10 10:54:10.386199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.280 [2024-06-10 10:54:10.386212] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.280 [2024-06-10 10:54:10.386460] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.280 [2024-06-10 10:54:10.386685] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.280 [2024-06-10 10:54:10.386694] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.280 [2024-06-10 10:54:10.386702] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.280 [2024-06-10 10:54:10.390256] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.280 [2024-06-10 10:54:10.399260] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.280 [2024-06-10 10:54:10.399859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-06-10 10:54:10.399878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.280 [2024-06-10 10:54:10.399886] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.280 [2024-06-10 10:54:10.400105] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.280 [2024-06-10 10:54:10.400338] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.280 [2024-06-10 10:54:10.400349] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.280 [2024-06-10 10:54:10.400356] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.280 [2024-06-10 10:54:10.403901] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.280 [2024-06-10 10:54:10.413095] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.280 [2024-06-10 10:54:10.413774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-06-10 10:54:10.413813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.280 [2024-06-10 10:54:10.413823] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.280 [2024-06-10 10:54:10.414061] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.280 [2024-06-10 10:54:10.414296] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.280 [2024-06-10 10:54:10.414306] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.280 [2024-06-10 10:54:10.414314] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.280 [2024-06-10 10:54:10.417901] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.280 [2024-06-10 10:54:10.426895] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.280 [2024-06-10 10:54:10.427424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-06-10 10:54:10.427463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.280 [2024-06-10 10:54:10.427473] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.280 [2024-06-10 10:54:10.427711] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.280 [2024-06-10 10:54:10.427936] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.280 [2024-06-10 10:54:10.427946] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.280 [2024-06-10 10:54:10.427954] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.280 [2024-06-10 10:54:10.431513] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.280 [2024-06-10 10:54:10.440718] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.280 [2024-06-10 10:54:10.441456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-06-10 10:54:10.441494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.280 [2024-06-10 10:54:10.441505] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.280 [2024-06-10 10:54:10.441743] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.280 [2024-06-10 10:54:10.441968] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.280 [2024-06-10 10:54:10.441977] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.280 [2024-06-10 10:54:10.441985] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.280 [2024-06-10 10:54:10.445542] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.280 [2024-06-10 10:54:10.454541] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.280 [2024-06-10 10:54:10.455172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-06-10 10:54:10.455191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.280 [2024-06-10 10:54:10.455199] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.280 [2024-06-10 10:54:10.455423] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.280 [2024-06-10 10:54:10.455644] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.280 [2024-06-10 10:54:10.455653] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.280 [2024-06-10 10:54:10.455661] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.280 [2024-06-10 10:54:10.459205] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.280 [2024-06-10 10:54:10.468407] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.280 [2024-06-10 10:54:10.469093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-06-10 10:54:10.469131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.280 [2024-06-10 10:54:10.469142] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.280 [2024-06-10 10:54:10.469389] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.280 [2024-06-10 10:54:10.469613] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.280 [2024-06-10 10:54:10.469623] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.280 [2024-06-10 10:54:10.469630] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.280 [2024-06-10 10:54:10.473177] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.280 [2024-06-10 10:54:10.482476] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.280 [2024-06-10 10:54:10.483095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-06-10 10:54:10.483113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.280 [2024-06-10 10:54:10.483121] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.280 [2024-06-10 10:54:10.483347] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.280 [2024-06-10 10:54:10.483569] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.280 [2024-06-10 10:54:10.483578] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.280 [2024-06-10 10:54:10.483585] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.280 [2024-06-10 10:54:10.487131] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.280 [2024-06-10 10:54:10.496343] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.280 [2024-06-10 10:54:10.496920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-06-10 10:54:10.496936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.280 [2024-06-10 10:54:10.496948] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.280 [2024-06-10 10:54:10.497167] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.280 [2024-06-10 10:54:10.497393] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.280 [2024-06-10 10:54:10.497403] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.280 [2024-06-10 10:54:10.497410] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.280 [2024-06-10 10:54:10.500952] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.280 [2024-06-10 10:54:10.510147] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.280 [2024-06-10 10:54:10.510874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.280 [2024-06-10 10:54:10.510913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.280 [2024-06-10 10:54:10.510924] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.280 [2024-06-10 10:54:10.511162] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.280 [2024-06-10 10:54:10.511394] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.280 [2024-06-10 10:54:10.511404] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.280 [2024-06-10 10:54:10.511412] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.280 [2024-06-10 10:54:10.514962] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.280 [2024-06-10 10:54:10.523964] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.281 [2024-06-10 10:54:10.524655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-06-10 10:54:10.524694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.281 [2024-06-10 10:54:10.524705] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.281 [2024-06-10 10:54:10.524943] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.281 [2024-06-10 10:54:10.525167] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.281 [2024-06-10 10:54:10.525177] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.281 [2024-06-10 10:54:10.525184] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.281 [2024-06-10 10:54:10.528743] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.281 [2024-06-10 10:54:10.537978] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.281 [2024-06-10 10:54:10.538687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-06-10 10:54:10.538726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.281 [2024-06-10 10:54:10.538736] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.281 [2024-06-10 10:54:10.538975] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.281 [2024-06-10 10:54:10.539200] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.281 [2024-06-10 10:54:10.539210] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.281 [2024-06-10 10:54:10.539222] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.281 [2024-06-10 10:54:10.542780] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.281 [2024-06-10 10:54:10.551979] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.281 [2024-06-10 10:54:10.552711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.281 [2024-06-10 10:54:10.552749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.281 [2024-06-10 10:54:10.552760] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.281 [2024-06-10 10:54:10.552998] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.281 [2024-06-10 10:54:10.553222] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.281 [2024-06-10 10:54:10.553232] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.281 [2024-06-10 10:54:10.553239] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.281 [2024-06-10 10:54:10.556796] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.543 [2024-06-10 10:54:10.565792] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.543 [2024-06-10 10:54:10.566485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.543 [2024-06-10 10:54:10.566524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.543 [2024-06-10 10:54:10.566534] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.543 [2024-06-10 10:54:10.566773] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.543 [2024-06-10 10:54:10.566996] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.543 [2024-06-10 10:54:10.567006] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.543 [2024-06-10 10:54:10.567014] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.543 [2024-06-10 10:54:10.570570] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.543 [2024-06-10 10:54:10.579766] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.543 [2024-06-10 10:54:10.580545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.543 [2024-06-10 10:54:10.580584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.543 [2024-06-10 10:54:10.580595] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.543 [2024-06-10 10:54:10.580834] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.543 [2024-06-10 10:54:10.581058] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.543 [2024-06-10 10:54:10.581068] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.543 [2024-06-10 10:54:10.581075] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.543 [2024-06-10 10:54:10.584628] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.543 [2024-06-10 10:54:10.593619] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.543 [2024-06-10 10:54:10.594352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.543 [2024-06-10 10:54:10.594390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.543 [2024-06-10 10:54:10.594402] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.543 [2024-06-10 10:54:10.594644] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.543 [2024-06-10 10:54:10.594868] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.543 [2024-06-10 10:54:10.594878] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.543 [2024-06-10 10:54:10.594885] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.543 [2024-06-10 10:54:10.598447] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.543 [2024-06-10 10:54:10.607447] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.543 [2024-06-10 10:54:10.608157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.543 [2024-06-10 10:54:10.608196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.543 [2024-06-10 10:54:10.608208] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.543 [2024-06-10 10:54:10.608456] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.543 [2024-06-10 10:54:10.608681] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.543 [2024-06-10 10:54:10.608691] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.543 [2024-06-10 10:54:10.608698] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.543 [2024-06-10 10:54:10.612248] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.543 [2024-06-10 10:54:10.621448] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.543 [2024-06-10 10:54:10.622036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.543 [2024-06-10 10:54:10.622054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.543 [2024-06-10 10:54:10.622062] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.543 [2024-06-10 10:54:10.622287] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.543 [2024-06-10 10:54:10.622508] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.543 [2024-06-10 10:54:10.622517] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.543 [2024-06-10 10:54:10.622524] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.543 [2024-06-10 10:54:10.626067] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.543 [2024-06-10 10:54:10.635266] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.543 [2024-06-10 10:54:10.635986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.543 [2024-06-10 10:54:10.636025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.543 [2024-06-10 10:54:10.636037] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.543 [2024-06-10 10:54:10.636290] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.543 [2024-06-10 10:54:10.636515] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.543 [2024-06-10 10:54:10.636524] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.543 [2024-06-10 10:54:10.636532] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.543 [2024-06-10 10:54:10.640082] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.543 [2024-06-10 10:54:10.649077] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.543 [2024-06-10 10:54:10.649765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.543 [2024-06-10 10:54:10.649803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.544 [2024-06-10 10:54:10.649814] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.544 [2024-06-10 10:54:10.650052] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.544 [2024-06-10 10:54:10.650284] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.544 [2024-06-10 10:54:10.650294] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.544 [2024-06-10 10:54:10.650302] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.544 [2024-06-10 10:54:10.653852] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.544 [2024-06-10 10:54:10.663052] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.544 [2024-06-10 10:54:10.663770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.544 [2024-06-10 10:54:10.663809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.544 [2024-06-10 10:54:10.663820] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.544 [2024-06-10 10:54:10.664058] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.544 [2024-06-10 10:54:10.664290] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.544 [2024-06-10 10:54:10.664300] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.544 [2024-06-10 10:54:10.664308] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.544 [2024-06-10 10:54:10.667854] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.544 [2024-06-10 10:54:10.676846] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.544 [2024-06-10 10:54:10.677568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.544 [2024-06-10 10:54:10.677606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.544 [2024-06-10 10:54:10.677617] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.544 [2024-06-10 10:54:10.677858] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.544 [2024-06-10 10:54:10.678082] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.544 [2024-06-10 10:54:10.678092] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.544 [2024-06-10 10:54:10.678104] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.544 [2024-06-10 10:54:10.681663] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.544 [2024-06-10 10:54:10.690654] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.544 [2024-06-10 10:54:10.691352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.544 [2024-06-10 10:54:10.691390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.544 [2024-06-10 10:54:10.691402] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.544 [2024-06-10 10:54:10.691643] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.544 [2024-06-10 10:54:10.691867] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.544 [2024-06-10 10:54:10.691877] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.544 [2024-06-10 10:54:10.691884] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.544 [2024-06-10 10:54:10.695447] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.544 [2024-06-10 10:54:10.704650] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.544 [2024-06-10 10:54:10.705383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.544 [2024-06-10 10:54:10.705422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.544 [2024-06-10 10:54:10.705434] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.544 [2024-06-10 10:54:10.705673] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.544 [2024-06-10 10:54:10.705898] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.544 [2024-06-10 10:54:10.705907] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.544 [2024-06-10 10:54:10.705914] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.544 [2024-06-10 10:54:10.709472] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.544 [2024-06-10 10:54:10.718463] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.544 [2024-06-10 10:54:10.719080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.544 [2024-06-10 10:54:10.719098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.544 [2024-06-10 10:54:10.719106] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.544 [2024-06-10 10:54:10.719331] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.544 [2024-06-10 10:54:10.719551] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.544 [2024-06-10 10:54:10.719560] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.544 [2024-06-10 10:54:10.719567] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.544 [2024-06-10 10:54:10.723106] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.544 [2024-06-10 10:54:10.732296] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.544 [2024-06-10 10:54:10.732940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.544 [2024-06-10 10:54:10.732983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.544 [2024-06-10 10:54:10.732994] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.544 [2024-06-10 10:54:10.733232] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.544 [2024-06-10 10:54:10.733464] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.544 [2024-06-10 10:54:10.733475] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.544 [2024-06-10 10:54:10.733482] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.544 [2024-06-10 10:54:10.737039] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.544 [2024-06-10 10:54:10.746270] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.544 [2024-06-10 10:54:10.746953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.544 [2024-06-10 10:54:10.746992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.544 [2024-06-10 10:54:10.747003] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.544 [2024-06-10 10:54:10.747250] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.544 [2024-06-10 10:54:10.747476] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.544 [2024-06-10 10:54:10.747486] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.544 [2024-06-10 10:54:10.747494] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.544 [2024-06-10 10:54:10.751041] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.544 [2024-06-10 10:54:10.760236] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.544 [2024-06-10 10:54:10.760920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.544 [2024-06-10 10:54:10.760960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.544 [2024-06-10 10:54:10.760971] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.544 [2024-06-10 10:54:10.761211] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.544 [2024-06-10 10:54:10.761445] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.544 [2024-06-10 10:54:10.761457] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.544 [2024-06-10 10:54:10.761464] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.544 [2024-06-10 10:54:10.765010] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.544 [2024-06-10 10:54:10.774215] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.544 [2024-06-10 10:54:10.774935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.544 [2024-06-10 10:54:10.774973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.544 [2024-06-10 10:54:10.774984] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.544 [2024-06-10 10:54:10.775223] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.544 [2024-06-10 10:54:10.775460] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.544 [2024-06-10 10:54:10.775471] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.544 [2024-06-10 10:54:10.775478] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.544 [2024-06-10 10:54:10.779024] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.544 [2024-06-10 10:54:10.788013] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.544 [2024-06-10 10:54:10.788598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.544 [2024-06-10 10:54:10.788636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.544 [2024-06-10 10:54:10.788646] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.544 [2024-06-10 10:54:10.788885] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.545 [2024-06-10 10:54:10.789109] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.545 [2024-06-10 10:54:10.789118] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.545 [2024-06-10 10:54:10.789126] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.545 [2024-06-10 10:54:10.792682] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.545 [2024-06-10 10:54:10.801891] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.545 [2024-06-10 10:54:10.802599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.545 [2024-06-10 10:54:10.802637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.545 [2024-06-10 10:54:10.802648] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.545 [2024-06-10 10:54:10.802886] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.545 [2024-06-10 10:54:10.803110] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.545 [2024-06-10 10:54:10.803120] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.545 [2024-06-10 10:54:10.803127] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.545 [2024-06-10 10:54:10.806680] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.545 [2024-06-10 10:54:10.815872] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.545 [2024-06-10 10:54:10.816560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.545 [2024-06-10 10:54:10.816599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.545 [2024-06-10 10:54:10.816609] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.545 [2024-06-10 10:54:10.816847] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.545 [2024-06-10 10:54:10.817072] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.545 [2024-06-10 10:54:10.817081] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.545 [2024-06-10 10:54:10.817088] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.545 [2024-06-10 10:54:10.820645] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.807 [2024-06-10 10:54:10.829757] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.807 [2024-06-10 10:54:10.830479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.807 [2024-06-10 10:54:10.830517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.807 [2024-06-10 10:54:10.830528] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.807 [2024-06-10 10:54:10.830767] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.807 [2024-06-10 10:54:10.830992] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.807 [2024-06-10 10:54:10.831001] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.807 [2024-06-10 10:54:10.831009] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.807 [2024-06-10 10:54:10.834567] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.807 [2024-06-10 10:54:10.843558] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.807 [2024-06-10 10:54:10.844228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.807 [2024-06-10 10:54:10.844273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.807 [2024-06-10 10:54:10.844284] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.807 [2024-06-10 10:54:10.844523] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.807 [2024-06-10 10:54:10.844747] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.807 [2024-06-10 10:54:10.844756] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.807 [2024-06-10 10:54:10.844764] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.807 [2024-06-10 10:54:10.848317] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.807 [2024-06-10 10:54:10.857510] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.807 [2024-06-10 10:54:10.858232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.807 [2024-06-10 10:54:10.858276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.807 [2024-06-10 10:54:10.858287] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.807 [2024-06-10 10:54:10.858525] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.807 [2024-06-10 10:54:10.858749] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.807 [2024-06-10 10:54:10.858758] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.807 [2024-06-10 10:54:10.858766] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.807 [2024-06-10 10:54:10.862316] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.807 [2024-06-10 10:54:10.871305] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.807 [2024-06-10 10:54:10.871984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.807 [2024-06-10 10:54:10.872022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.807 [2024-06-10 10:54:10.872036] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.807 [2024-06-10 10:54:10.872284] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.807 [2024-06-10 10:54:10.872508] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.807 [2024-06-10 10:54:10.872518] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.807 [2024-06-10 10:54:10.872526] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.807 [2024-06-10 10:54:10.876071] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.807 [2024-06-10 10:54:10.885494] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.807 [2024-06-10 10:54:10.886107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.807 [2024-06-10 10:54:10.886144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.807 [2024-06-10 10:54:10.886155] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.807 [2024-06-10 10:54:10.886403] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.807 [2024-06-10 10:54:10.886628] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.807 [2024-06-10 10:54:10.886637] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.807 [2024-06-10 10:54:10.886644] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.807 [2024-06-10 10:54:10.890190] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.807 [2024-06-10 10:54:10.899400] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.807 [2024-06-10 10:54:10.900089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.807 [2024-06-10 10:54:10.900127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.807 [2024-06-10 10:54:10.900138] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.807 [2024-06-10 10:54:10.900386] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.807 [2024-06-10 10:54:10.900611] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.807 [2024-06-10 10:54:10.900621] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.807 [2024-06-10 10:54:10.900628] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.807 [2024-06-10 10:54:10.904173] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.807 [2024-06-10 10:54:10.913373] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.807 [2024-06-10 10:54:10.914091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.807 [2024-06-10 10:54:10.914130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.807 [2024-06-10 10:54:10.914140] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.807 [2024-06-10 10:54:10.914388] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.807 [2024-06-10 10:54:10.914613] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.807 [2024-06-10 10:54:10.914630] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.808 [2024-06-10 10:54:10.914638] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.808 [2024-06-10 10:54:10.918188] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.808 [2024-06-10 10:54:10.927178] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.808 [2024-06-10 10:54:10.927805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.808 [2024-06-10 10:54:10.927824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.808 [2024-06-10 10:54:10.927832] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.808 [2024-06-10 10:54:10.928051] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.808 [2024-06-10 10:54:10.928278] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.808 [2024-06-10 10:54:10.928288] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.808 [2024-06-10 10:54:10.928295] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.808 [2024-06-10 10:54:10.931836] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.808 [2024-06-10 10:54:10.941022] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.808 [2024-06-10 10:54:10.941679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.808 [2024-06-10 10:54:10.941717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.808 [2024-06-10 10:54:10.941728] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.808 [2024-06-10 10:54:10.941967] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.808 [2024-06-10 10:54:10.942191] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.808 [2024-06-10 10:54:10.942200] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.808 [2024-06-10 10:54:10.942208] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.808 [2024-06-10 10:54:10.945774] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.808 [2024-06-10 10:54:10.955015] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.808 [2024-06-10 10:54:10.955658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.808 [2024-06-10 10:54:10.955677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.808 [2024-06-10 10:54:10.955685] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.808 [2024-06-10 10:54:10.955905] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.808 [2024-06-10 10:54:10.956125] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.808 [2024-06-10 10:54:10.956134] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.808 [2024-06-10 10:54:10.956141] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.808 [2024-06-10 10:54:10.959686] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.808 [2024-06-10 10:54:10.968885] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.808 [2024-06-10 10:54:10.969527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.808 [2024-06-10 10:54:10.969565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.808 [2024-06-10 10:54:10.969576] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.808 [2024-06-10 10:54:10.969815] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.808 [2024-06-10 10:54:10.970039] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.808 [2024-06-10 10:54:10.970048] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.808 [2024-06-10 10:54:10.970056] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.808 [2024-06-10 10:54:10.973613] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.808 [2024-06-10 10:54:10.982807] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.808 [2024-06-10 10:54:10.983531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.808 [2024-06-10 10:54:10.983569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.808 [2024-06-10 10:54:10.983580] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.808 [2024-06-10 10:54:10.983818] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.808 [2024-06-10 10:54:10.984041] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.808 [2024-06-10 10:54:10.984051] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.808 [2024-06-10 10:54:10.984059] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.808 [2024-06-10 10:54:10.987615] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.808 [2024-06-10 10:54:10.996622] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.808 [2024-06-10 10:54:10.997328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.808 [2024-06-10 10:54:10.997366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.808 [2024-06-10 10:54:10.997376] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.808 [2024-06-10 10:54:10.997614] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.808 [2024-06-10 10:54:10.997838] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.808 [2024-06-10 10:54:10.997847] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.808 [2024-06-10 10:54:10.997855] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.808 [2024-06-10 10:54:11.001408] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.808 [2024-06-10 10:54:11.010600] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.808 [2024-06-10 10:54:11.011318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.808 [2024-06-10 10:54:11.011356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.808 [2024-06-10 10:54:11.011367] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.808 [2024-06-10 10:54:11.011610] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.808 [2024-06-10 10:54:11.011834] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.808 [2024-06-10 10:54:11.011844] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.808 [2024-06-10 10:54:11.011851] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.808 [2024-06-10 10:54:11.015409] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.808 [2024-06-10 10:54:11.024399] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.808 [2024-06-10 10:54:11.025113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.808 [2024-06-10 10:54:11.025151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.808 [2024-06-10 10:54:11.025162] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.808 [2024-06-10 10:54:11.025408] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.808 [2024-06-10 10:54:11.025633] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.808 [2024-06-10 10:54:11.025643] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.808 [2024-06-10 10:54:11.025650] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.808 [2024-06-10 10:54:11.029197] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.808 [2024-06-10 10:54:11.038190] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.808 [2024-06-10 10:54:11.038850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.808 [2024-06-10 10:54:11.038889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.808 [2024-06-10 10:54:11.038899] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.808 [2024-06-10 10:54:11.039137] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.808 [2024-06-10 10:54:11.039370] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.808 [2024-06-10 10:54:11.039380] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.808 [2024-06-10 10:54:11.039388] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.808 [2024-06-10 10:54:11.042935] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.808 [2024-06-10 10:54:11.052141] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.808 [2024-06-10 10:54:11.052869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.808 [2024-06-10 10:54:11.052907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.808 [2024-06-10 10:54:11.052917] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.808 [2024-06-10 10:54:11.053155] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.808 [2024-06-10 10:54:11.053388] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.808 [2024-06-10 10:54:11.053399] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.808 [2024-06-10 10:54:11.053410] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.809 [2024-06-10 10:54:11.056957] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.809 [2024-06-10 10:54:11.065943] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.809 [2024-06-10 10:54:11.066624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.809 [2024-06-10 10:54:11.066663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.809 [2024-06-10 10:54:11.066674] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.809 [2024-06-10 10:54:11.066912] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.809 [2024-06-10 10:54:11.067136] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.809 [2024-06-10 10:54:11.067146] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.809 [2024-06-10 10:54:11.067153] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.809 [2024-06-10 10:54:11.070708] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.809 [2024-06-10 10:54:11.079903] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.809 [2024-06-10 10:54:11.080576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.809 [2024-06-10 10:54:11.080614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:46.809 [2024-06-10 10:54:11.080625] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:46.809 [2024-06-10 10:54:11.080863] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:46.809 [2024-06-10 10:54:11.081088] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.809 [2024-06-10 10:54:11.081097] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.809 [2024-06-10 10:54:11.081105] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.809 [2024-06-10 10:54:11.084659] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.071 [2024-06-10 10:54:11.093859] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.071 [2024-06-10 10:54:11.094544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.071 [2024-06-10 10:54:11.094583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.071 [2024-06-10 10:54:11.094594] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.071 [2024-06-10 10:54:11.094832] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.071 [2024-06-10 10:54:11.095057] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.071 [2024-06-10 10:54:11.095066] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.071 [2024-06-10 10:54:11.095074] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.071 [2024-06-10 10:54:11.098641] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.071 [2024-06-10 10:54:11.107839] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.071 [2024-06-10 10:54:11.108379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.071 [2024-06-10 10:54:11.108420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.071 [2024-06-10 10:54:11.108431] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.071 [2024-06-10 10:54:11.108669] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.071 [2024-06-10 10:54:11.108893] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.071 [2024-06-10 10:54:11.108902] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.071 [2024-06-10 10:54:11.108910] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.071 [2024-06-10 10:54:11.112464] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.071 [2024-06-10 10:54:11.121663] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.071 [2024-06-10 10:54:11.122342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.071 [2024-06-10 10:54:11.122380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.071 [2024-06-10 10:54:11.122392] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.071 [2024-06-10 10:54:11.122632] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.071 [2024-06-10 10:54:11.122857] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.071 [2024-06-10 10:54:11.122866] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.071 [2024-06-10 10:54:11.122873] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.071 [2024-06-10 10:54:11.126429] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.071 [2024-06-10 10:54:11.135623] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.071 [2024-06-10 10:54:11.136340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.071 [2024-06-10 10:54:11.136379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.071 [2024-06-10 10:54:11.136391] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.071 [2024-06-10 10:54:11.136631] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.071 [2024-06-10 10:54:11.136856] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.071 [2024-06-10 10:54:11.136865] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.071 [2024-06-10 10:54:11.136873] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.071 [2024-06-10 10:54:11.140426] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.071 [2024-06-10 10:54:11.149415] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.071 [2024-06-10 10:54:11.150136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.071 [2024-06-10 10:54:11.150174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.071 [2024-06-10 10:54:11.150185] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.071 [2024-06-10 10:54:11.150431] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.071 [2024-06-10 10:54:11.150661] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.071 [2024-06-10 10:54:11.150671] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.071 [2024-06-10 10:54:11.150679] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.071 [2024-06-10 10:54:11.154231] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.071 [2024-06-10 10:54:11.163258] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.071 [2024-06-10 10:54:11.163960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.071 [2024-06-10 10:54:11.163999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.071 [2024-06-10 10:54:11.164009] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.071 [2024-06-10 10:54:11.164255] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.071 [2024-06-10 10:54:11.164480] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.071 [2024-06-10 10:54:11.164491] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.071 [2024-06-10 10:54:11.164499] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.071 [2024-06-10 10:54:11.168048] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.071 [2024-06-10 10:54:11.177256] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.071 [2024-06-10 10:54:11.177975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.071 [2024-06-10 10:54:11.178013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.071 [2024-06-10 10:54:11.178024] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.072 [2024-06-10 10:54:11.178269] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.072 [2024-06-10 10:54:11.178495] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.072 [2024-06-10 10:54:11.178504] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.072 [2024-06-10 10:54:11.178512] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.072 [2024-06-10 10:54:11.182059] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.072 [2024-06-10 10:54:11.191052] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.072 [2024-06-10 10:54:11.191732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.072 [2024-06-10 10:54:11.191771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.072 [2024-06-10 10:54:11.191781] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.072 [2024-06-10 10:54:11.192019] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.072 [2024-06-10 10:54:11.192253] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.072 [2024-06-10 10:54:11.192263] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.072 [2024-06-10 10:54:11.192271] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.072 [2024-06-10 10:54:11.195828] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.072 [2024-06-10 10:54:11.205033] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.072 [2024-06-10 10:54:11.205703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.072 [2024-06-10 10:54:11.205742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.072 [2024-06-10 10:54:11.205752] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.072 [2024-06-10 10:54:11.205990] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.072 [2024-06-10 10:54:11.206215] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.072 [2024-06-10 10:54:11.206225] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.072 [2024-06-10 10:54:11.206232] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.072 [2024-06-10 10:54:11.209785] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.072 [2024-06-10 10:54:11.218981] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.072 [2024-06-10 10:54:11.219656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.072 [2024-06-10 10:54:11.219695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.072 [2024-06-10 10:54:11.219706] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.072 [2024-06-10 10:54:11.219944] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.072 [2024-06-10 10:54:11.220169] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.072 [2024-06-10 10:54:11.220178] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.072 [2024-06-10 10:54:11.220185] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.072 [2024-06-10 10:54:11.223741] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.072 [2024-06-10 10:54:11.232938] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.072 [2024-06-10 10:54:11.233524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.072 [2024-06-10 10:54:11.233543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.072 [2024-06-10 10:54:11.233551] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.072 [2024-06-10 10:54:11.233771] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.072 [2024-06-10 10:54:11.233990] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.072 [2024-06-10 10:54:11.233999] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.072 [2024-06-10 10:54:11.234006] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.072 [2024-06-10 10:54:11.237551] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.072 [2024-06-10 10:54:11.246746] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.072 [2024-06-10 10:54:11.247355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.072 [2024-06-10 10:54:11.247373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.072 [2024-06-10 10:54:11.247385] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.072 [2024-06-10 10:54:11.247604] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.072 [2024-06-10 10:54:11.247823] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.072 [2024-06-10 10:54:11.247832] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.072 [2024-06-10 10:54:11.247839] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.072 [2024-06-10 10:54:11.251385] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.072 [2024-06-10 10:54:11.260577] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.072 [2024-06-10 10:54:11.261278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.072 [2024-06-10 10:54:11.261316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.072 [2024-06-10 10:54:11.261327] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.072 [2024-06-10 10:54:11.261565] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.072 [2024-06-10 10:54:11.261790] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.072 [2024-06-10 10:54:11.261799] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.072 [2024-06-10 10:54:11.261807] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.072 [2024-06-10 10:54:11.265362] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.072 [2024-06-10 10:54:11.274564] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.072 [2024-06-10 10:54:11.275282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.072 [2024-06-10 10:54:11.275320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.072 [2024-06-10 10:54:11.275330] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.072 [2024-06-10 10:54:11.275569] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.072 [2024-06-10 10:54:11.275793] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.072 [2024-06-10 10:54:11.275802] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.072 [2024-06-10 10:54:11.275810] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.072 [2024-06-10 10:54:11.279364] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.072 [2024-06-10 10:54:11.288357] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.072 [2024-06-10 10:54:11.289029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.072 [2024-06-10 10:54:11.289067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.072 [2024-06-10 10:54:11.289078] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.072 [2024-06-10 10:54:11.289328] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.072 [2024-06-10 10:54:11.289553] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.072 [2024-06-10 10:54:11.289567] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.072 [2024-06-10 10:54:11.289575] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.072 [2024-06-10 10:54:11.293126] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.072 [2024-06-10 10:54:11.302343] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.072 [2024-06-10 10:54:11.303016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.072 [2024-06-10 10:54:11.303054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.072 [2024-06-10 10:54:11.303065] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.072 [2024-06-10 10:54:11.303313] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.072 [2024-06-10 10:54:11.303538] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.072 [2024-06-10 10:54:11.303548] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.072 [2024-06-10 10:54:11.303556] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.072 [2024-06-10 10:54:11.307104] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.072 [2024-06-10 10:54:11.316309] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.072 [2024-06-10 10:54:11.317011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.072 [2024-06-10 10:54:11.317050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.072 [2024-06-10 10:54:11.317060] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.072 [2024-06-10 10:54:11.317308] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.072 [2024-06-10 10:54:11.317534] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.073 [2024-06-10 10:54:11.317544] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.073 [2024-06-10 10:54:11.317551] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.073 [2024-06-10 10:54:11.321096] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.073 [2024-06-10 10:54:11.330291] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.073 [2024-06-10 10:54:11.330984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.073 [2024-06-10 10:54:11.331022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.073 [2024-06-10 10:54:11.331033] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.073 [2024-06-10 10:54:11.331282] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.073 [2024-06-10 10:54:11.331507] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.073 [2024-06-10 10:54:11.331516] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.073 [2024-06-10 10:54:11.331524] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.073 [2024-06-10 10:54:11.335072] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.073 [2024-06-10 10:54:11.344274] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.073 [2024-06-10 10:54:11.344955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.073 [2024-06-10 10:54:11.344994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.073 [2024-06-10 10:54:11.345004] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.073 [2024-06-10 10:54:11.345252] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.073 [2024-06-10 10:54:11.345478] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.073 [2024-06-10 10:54:11.345488] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.073 [2024-06-10 10:54:11.345495] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.073 [2024-06-10 10:54:11.349040] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.335 [2024-06-10 10:54:11.358248] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.335 [2024-06-10 10:54:11.358924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.335 [2024-06-10 10:54:11.358962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.335 [2024-06-10 10:54:11.358973] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.335 [2024-06-10 10:54:11.359211] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.335 [2024-06-10 10:54:11.359449] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.335 [2024-06-10 10:54:11.359460] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.335 [2024-06-10 10:54:11.359468] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.335 [2024-06-10 10:54:11.363016] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.335 [2024-06-10 10:54:11.372244] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.335 [2024-06-10 10:54:11.372865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.335 [2024-06-10 10:54:11.372884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.335 [2024-06-10 10:54:11.372891] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.335 [2024-06-10 10:54:11.373111] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.335 [2024-06-10 10:54:11.373340] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.335 [2024-06-10 10:54:11.373350] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.335 [2024-06-10 10:54:11.373357] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.335 [2024-06-10 10:54:11.376900] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.335 [2024-06-10 10:54:11.386095] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.335 [2024-06-10 10:54:11.386677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.335 [2024-06-10 10:54:11.386693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.335 [2024-06-10 10:54:11.386701] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.335 [2024-06-10 10:54:11.386924] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.335 [2024-06-10 10:54:11.387144] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.335 [2024-06-10 10:54:11.387152] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.335 [2024-06-10 10:54:11.387159] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.335 [2024-06-10 10:54:11.390708] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.335 [2024-06-10 10:54:11.399908] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.335 [2024-06-10 10:54:11.400475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.335 [2024-06-10 10:54:11.400492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.335 [2024-06-10 10:54:11.400500] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.335 [2024-06-10 10:54:11.400719] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.335 [2024-06-10 10:54:11.400939] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.335 [2024-06-10 10:54:11.400947] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.335 [2024-06-10 10:54:11.400954] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.335 [2024-06-10 10:54:11.404499] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.335 [2024-06-10 10:54:11.413896] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.335 [2024-06-10 10:54:11.414602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.335 [2024-06-10 10:54:11.414641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.335 [2024-06-10 10:54:11.414651] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.335 [2024-06-10 10:54:11.414889] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.335 [2024-06-10 10:54:11.415114] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.335 [2024-06-10 10:54:11.415124] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.335 [2024-06-10 10:54:11.415131] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.335 [2024-06-10 10:54:11.418687] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.335 [2024-06-10 10:54:11.427896] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.335 [2024-06-10 10:54:11.428573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.335 [2024-06-10 10:54:11.428611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.335 [2024-06-10 10:54:11.428622] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.335 [2024-06-10 10:54:11.428859] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.335 [2024-06-10 10:54:11.429083] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.335 [2024-06-10 10:54:11.429093] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.335 [2024-06-10 10:54:11.429104] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.335 [2024-06-10 10:54:11.432658] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.335 [2024-06-10 10:54:11.441885] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.335 [2024-06-10 10:54:11.442599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.335 [2024-06-10 10:54:11.442638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.335 [2024-06-10 10:54:11.442649] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.335 [2024-06-10 10:54:11.442888] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.335 [2024-06-10 10:54:11.443112] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.335 [2024-06-10 10:54:11.443122] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.335 [2024-06-10 10:54:11.443129] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.335 [2024-06-10 10:54:11.446682] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.335 [2024-06-10 10:54:11.455881] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.335 [2024-06-10 10:54:11.456588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.335 [2024-06-10 10:54:11.456626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.335 [2024-06-10 10:54:11.456636] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.335 [2024-06-10 10:54:11.456875] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.335 [2024-06-10 10:54:11.457100] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.335 [2024-06-10 10:54:11.457110] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.335 [2024-06-10 10:54:11.457118] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.335 [2024-06-10 10:54:11.460672] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.335 [2024-06-10 10:54:11.469875] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.335 [2024-06-10 10:54:11.470467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.335 [2024-06-10 10:54:11.470487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.335 [2024-06-10 10:54:11.470495] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.335 [2024-06-10 10:54:11.470715] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.335 [2024-06-10 10:54:11.470935] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.336 [2024-06-10 10:54:11.470943] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.336 [2024-06-10 10:54:11.470950] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.336 [2024-06-10 10:54:11.474499] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.336 [2024-06-10 10:54:11.483692] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.336 [2024-06-10 10:54:11.484287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.336 [2024-06-10 10:54:11.484315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.336 [2024-06-10 10:54:11.484323] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.336 [2024-06-10 10:54:11.484547] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.336 [2024-06-10 10:54:11.484768] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.336 [2024-06-10 10:54:11.484776] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.336 [2024-06-10 10:54:11.484783] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.336 [2024-06-10 10:54:11.488339] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.336 [2024-06-10 10:54:11.497540] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.336 [2024-06-10 10:54:11.498180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.336 [2024-06-10 10:54:11.498218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.336 [2024-06-10 10:54:11.498229] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.336 [2024-06-10 10:54:11.498476] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.336 [2024-06-10 10:54:11.498701] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.336 [2024-06-10 10:54:11.498711] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.336 [2024-06-10 10:54:11.498718] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.336 [2024-06-10 10:54:11.502273] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.336 [2024-06-10 10:54:11.511480] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.336 [2024-06-10 10:54:11.512161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.336 [2024-06-10 10:54:11.512200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.336 [2024-06-10 10:54:11.512211] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.336 [2024-06-10 10:54:11.512458] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.336 [2024-06-10 10:54:11.512683] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.336 [2024-06-10 10:54:11.512693] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.336 [2024-06-10 10:54:11.512700] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.336 [2024-06-10 10:54:11.516252] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.336 [2024-06-10 10:54:11.525460] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.336 [2024-06-10 10:54:11.526179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.336 [2024-06-10 10:54:11.526217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.336 [2024-06-10 10:54:11.526229] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.336 [2024-06-10 10:54:11.526477] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.336 [2024-06-10 10:54:11.526705] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.336 [2024-06-10 10:54:11.526716] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.336 [2024-06-10 10:54:11.526723] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.336 [2024-06-10 10:54:11.530272] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.336 [2024-06-10 10:54:11.539260] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.336 [2024-06-10 10:54:11.539956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.336 [2024-06-10 10:54:11.539994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.336 [2024-06-10 10:54:11.540004] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.336 [2024-06-10 10:54:11.540253] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.336 [2024-06-10 10:54:11.540479] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.336 [2024-06-10 10:54:11.540488] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.336 [2024-06-10 10:54:11.540496] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.336 [2024-06-10 10:54:11.544043] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.336 [2024-06-10 10:54:11.553250] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.336 [2024-06-10 10:54:11.553968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.336 [2024-06-10 10:54:11.554006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.336 [2024-06-10 10:54:11.554016] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.336 [2024-06-10 10:54:11.554265] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.336 [2024-06-10 10:54:11.554490] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.336 [2024-06-10 10:54:11.554500] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.336 [2024-06-10 10:54:11.554507] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.336 [2024-06-10 10:54:11.558054] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.336 [2024-06-10 10:54:11.567042] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.336 [2024-06-10 10:54:11.567731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.336 [2024-06-10 10:54:11.567769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.336 [2024-06-10 10:54:11.567780] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.336 [2024-06-10 10:54:11.568019] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.336 [2024-06-10 10:54:11.568252] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.336 [2024-06-10 10:54:11.568263] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.336 [2024-06-10 10:54:11.568270] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.336 [2024-06-10 10:54:11.571815] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.336 [2024-06-10 10:54:11.580840] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.336 [2024-06-10 10:54:11.581527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.336 [2024-06-10 10:54:11.581565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.336 [2024-06-10 10:54:11.581576] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.336 [2024-06-10 10:54:11.581814] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.336 [2024-06-10 10:54:11.582038] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.336 [2024-06-10 10:54:11.582048] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.336 [2024-06-10 10:54:11.582055] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.336 [2024-06-10 10:54:11.585610] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.336 [2024-06-10 10:54:11.594805] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.336 [2024-06-10 10:54:11.595487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.336 [2024-06-10 10:54:11.595525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.336 [2024-06-10 10:54:11.595535] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.336 [2024-06-10 10:54:11.595773] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.336 [2024-06-10 10:54:11.595997] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.336 [2024-06-10 10:54:11.596006] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.336 [2024-06-10 10:54:11.596014] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.336 [2024-06-10 10:54:11.599579] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.336 [2024-06-10 10:54:11.608771] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.336 [2024-06-10 10:54:11.609394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.336 [2024-06-10 10:54:11.609413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.336 [2024-06-10 10:54:11.609420] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.336 [2024-06-10 10:54:11.609640] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.336 [2024-06-10 10:54:11.609859] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.336 [2024-06-10 10:54:11.609868] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.336 [2024-06-10 10:54:11.609876] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.336 [2024-06-10 10:54:11.613419] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.599 [2024-06-10 10:54:11.622615] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.599 [2024-06-10 10:54:11.623226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.599 [2024-06-10 10:54:11.623248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.599 [2024-06-10 10:54:11.623264] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.599 [2024-06-10 10:54:11.623483] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.599 [2024-06-10 10:54:11.623702] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.599 [2024-06-10 10:54:11.623710] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.599 [2024-06-10 10:54:11.623717] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.599 [2024-06-10 10:54:11.627262] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.599 [2024-06-10 10:54:11.636461] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.599 [2024-06-10 10:54:11.637086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.599 [2024-06-10 10:54:11.637125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.599 [2024-06-10 10:54:11.637135] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.599 [2024-06-10 10:54:11.637384] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.599 [2024-06-10 10:54:11.637609] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.599 [2024-06-10 10:54:11.637619] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.599 [2024-06-10 10:54:11.637626] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.599 [2024-06-10 10:54:11.641172] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.599 [2024-06-10 10:54:11.650364] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.599 [2024-06-10 10:54:11.651075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.599 [2024-06-10 10:54:11.651114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.599 [2024-06-10 10:54:11.651124] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.599 [2024-06-10 10:54:11.651372] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.599 [2024-06-10 10:54:11.651597] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.599 [2024-06-10 10:54:11.651606] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.599 [2024-06-10 10:54:11.651614] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.599 [2024-06-10 10:54:11.655158] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.599 [2024-06-10 10:54:11.664356] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.599 [2024-06-10 10:54:11.665075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.599 [2024-06-10 10:54:11.665113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.599 [2024-06-10 10:54:11.665124] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.599 [2024-06-10 10:54:11.665372] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.599 [2024-06-10 10:54:11.665597] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.599 [2024-06-10 10:54:11.665611] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.599 [2024-06-10 10:54:11.665619] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.599 [2024-06-10 10:54:11.669167] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.599 [2024-06-10 10:54:11.678159] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.599 [2024-06-10 10:54:11.678873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.599 [2024-06-10 10:54:11.678912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.599 [2024-06-10 10:54:11.678923] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.599 [2024-06-10 10:54:11.679161] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.599 [2024-06-10 10:54:11.679396] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.599 [2024-06-10 10:54:11.679407] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.599 [2024-06-10 10:54:11.679414] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.599 [2024-06-10 10:54:11.682960] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.599 [2024-06-10 10:54:11.692154] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.599 [2024-06-10 10:54:11.692855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.599 [2024-06-10 10:54:11.692893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.599 [2024-06-10 10:54:11.692904] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.599 [2024-06-10 10:54:11.693142] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.599 [2024-06-10 10:54:11.693376] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.599 [2024-06-10 10:54:11.693386] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.599 [2024-06-10 10:54:11.693394] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.599 [2024-06-10 10:54:11.696953] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.599 [2024-06-10 10:54:11.705939] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.599 [2024-06-10 10:54:11.706667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.599 [2024-06-10 10:54:11.706706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.599 [2024-06-10 10:54:11.706717] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.599 [2024-06-10 10:54:11.706955] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.599 [2024-06-10 10:54:11.707180] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.599 [2024-06-10 10:54:11.707189] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.599 [2024-06-10 10:54:11.707197] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.599 [2024-06-10 10:54:11.710754] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.599 [2024-06-10 10:54:11.719746] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.599 [2024-06-10 10:54:11.720263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.599 [2024-06-10 10:54:11.720282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.599 [2024-06-10 10:54:11.720290] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.599 [2024-06-10 10:54:11.720510] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.599 [2024-06-10 10:54:11.720729] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.599 [2024-06-10 10:54:11.720739] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.599 [2024-06-10 10:54:11.720746] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.599 [2024-06-10 10:54:11.724290] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.599 [2024-06-10 10:54:11.733692] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.599 [2024-06-10 10:54:11.734235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.599 [2024-06-10 10:54:11.734281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.599 [2024-06-10 10:54:11.734292] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.599 [2024-06-10 10:54:11.734530] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.600 [2024-06-10 10:54:11.734754] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.600 [2024-06-10 10:54:11.734763] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.600 [2024-06-10 10:54:11.734770] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.600 [2024-06-10 10:54:11.738321] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.600 [2024-06-10 10:54:11.747525] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.600 [2024-06-10 10:54:11.748252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.600 [2024-06-10 10:54:11.748290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.600 [2024-06-10 10:54:11.748300] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.600 [2024-06-10 10:54:11.748539] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.600 [2024-06-10 10:54:11.748762] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.600 [2024-06-10 10:54:11.748772] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.600 [2024-06-10 10:54:11.748779] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1015843 Killed "${NVMF_APP[@]}" "$@" 00:28:47.600 10:54:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:28:47.600 10:54:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:47.600 10:54:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:47.600 10:54:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:47.600 10:54:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:47.600 [2024-06-10 10:54:11.752331] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.600 10:54:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1017832 00:28:47.600 10:54:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1017832 00:28:47.600 10:54:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:47.600 10:54:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 1017832 ']' 00:28:47.600 10:54:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:47.600 10:54:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:47.600 10:54:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:47.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:47.600 10:54:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:47.600 [2024-06-10 10:54:11.761351] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.600 10:54:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:47.600 [2024-06-10 10:54:11.761937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.600 [2024-06-10 10:54:11.761954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.600 [2024-06-10 10:54:11.761962] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.600 [2024-06-10 10:54:11.762181] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.600 [2024-06-10 10:54:11.762407] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.600 [2024-06-10 10:54:11.762416] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.600 [2024-06-10 10:54:11.762423] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.600 [2024-06-10 10:54:11.765973] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.600 [2024-06-10 10:54:11.775182] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.600 [2024-06-10 10:54:11.775809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.600 [2024-06-10 10:54:11.775825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.600 [2024-06-10 10:54:11.775832] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.600 [2024-06-10 10:54:11.776051] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.600 [2024-06-10 10:54:11.776277] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.600 [2024-06-10 10:54:11.776286] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.600 [2024-06-10 10:54:11.776294] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.600 [2024-06-10 10:54:11.779844] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.600 [2024-06-10 10:54:11.789086] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.600 [2024-06-10 10:54:11.789689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.600 [2024-06-10 10:54:11.789706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.600 [2024-06-10 10:54:11.789713] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.600 [2024-06-10 10:54:11.789932] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.600 [2024-06-10 10:54:11.790154] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.600 [2024-06-10 10:54:11.790162] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.600 [2024-06-10 10:54:11.790168] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.600 [2024-06-10 10:54:11.793720] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.600 [2024-06-10 10:54:11.802937] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.600 [2024-06-10 10:54:11.803620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.600 [2024-06-10 10:54:11.803658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.600 [2024-06-10 10:54:11.803669] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.600 [2024-06-10 10:54:11.803908] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.600 [2024-06-10 10:54:11.804132] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.600 [2024-06-10 10:54:11.804140] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.600 [2024-06-10 10:54:11.804148] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.600 [2024-06-10 10:54:11.807705] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.600 [2024-06-10 10:54:11.809282] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
00:28:47.600 [2024-06-10 10:54:11.809317] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:47.600 [2024-06-10 10:54:11.816933] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.600 [2024-06-10 10:54:11.817448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.600 [2024-06-10 10:54:11.817466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.600 [2024-06-10 10:54:11.817474] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.600 [2024-06-10 10:54:11.817694] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.600 [2024-06-10 10:54:11.817912] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.600 [2024-06-10 10:54:11.817921] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.600 [2024-06-10 10:54:11.817928] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.600 [2024-06-10 10:54:11.821486] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.600 [2024-06-10 10:54:11.830887] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.600 [2024-06-10 10:54:11.831596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.600 [2024-06-10 10:54:11.831634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.600 [2024-06-10 10:54:11.831645] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.600 [2024-06-10 10:54:11.831883] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.600 [2024-06-10 10:54:11.832110] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.600 [2024-06-10 10:54:11.832119] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.600 [2024-06-10 10:54:11.832127] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.600 EAL: No free 2048 kB hugepages reported on node 1 00:28:47.600 [2024-06-10 10:54:11.835686] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
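Editor's note: the two EAL lines above record the DPDK parameters the new nvmf_tgt instance starts with (core mask 0xE, --base-virtaddr, file prefix spdk0) and a warning that NUMA node 1 reports no free 2048 kB hugepages. A small, illustrative check of per-node hugepage state; the sysfs paths are standard Linux, not SPDK-specific:

  # Free/total 2 MiB hugepages per NUMA node (the EAL warning refers to node 1):
  for n in /sys/devices/system/node/node*/hugepages/hugepages-2048kB; do
      echo "$n: free=$(cat "$n"/free_hugepages) total=$(cat "$n"/nr_hugepages)"
  done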
00:28:47.600 [2024-06-10 10:54:11.844682] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.600 [2024-06-10 10:54:11.845275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.600 [2024-06-10 10:54:11.845312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.600 [2024-06-10 10:54:11.845323] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.600 [2024-06-10 10:54:11.845561] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.600 [2024-06-10 10:54:11.845784] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.600 [2024-06-10 10:54:11.845793] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.600 [2024-06-10 10:54:11.845801] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.600 [2024-06-10 10:54:11.849356] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.600 [2024-06-10 10:54:11.857733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:47.601 [2024-06-10 10:54:11.858655] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.601 [2024-06-10 10:54:11.859344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.601 [2024-06-10 10:54:11.859382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.601 [2024-06-10 10:54:11.859394] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.601 [2024-06-10 10:54:11.859637] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.601 [2024-06-10 10:54:11.859861] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.601 [2024-06-10 10:54:11.859870] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.601 [2024-06-10 10:54:11.859877] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.601 [2024-06-10 10:54:11.863441] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.601 [2024-06-10 10:54:11.872655] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.601 [2024-06-10 10:54:11.873455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.601 [2024-06-10 10:54:11.873494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.601 [2024-06-10 10:54:11.873506] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.601 [2024-06-10 10:54:11.873745] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.601 [2024-06-10 10:54:11.873969] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.601 [2024-06-10 10:54:11.873977] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.601 [2024-06-10 10:54:11.873985] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.601 [2024-06-10 10:54:11.877547] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.863 [2024-06-10 10:54:11.886578] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.863 [2024-06-10 10:54:11.887185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.863 [2024-06-10 10:54:11.887204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.863 [2024-06-10 10:54:11.887212] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.863 [2024-06-10 10:54:11.887439] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.863 [2024-06-10 10:54:11.887659] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.863 [2024-06-10 10:54:11.887667] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.863 [2024-06-10 10:54:11.887674] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.863 [2024-06-10 10:54:11.891270] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.863 [2024-06-10 10:54:11.900486] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.863 [2024-06-10 10:54:11.901205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.863 [2024-06-10 10:54:11.901251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.863 [2024-06-10 10:54:11.901263] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.863 [2024-06-10 10:54:11.901504] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.863 [2024-06-10 10:54:11.901727] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.863 [2024-06-10 10:54:11.901736] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.863 [2024-06-10 10:54:11.901743] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.863 [2024-06-10 10:54:11.905297] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.863 [2024-06-10 10:54:11.911912] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:47.863 [2024-06-10 10:54:11.911935] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:47.863 [2024-06-10 10:54:11.911942] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:47.863 [2024-06-10 10:54:11.911947] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:47.863 [2024-06-10 10:54:11.911951] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:47.863 [2024-06-10 10:54:11.912072] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:28:47.863 [2024-06-10 10:54:11.912228] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:28:47.863 [2024-06-10 10:54:11.912230] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:28:47.863 [2024-06-10 10:54:11.914294] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.863 [2024-06-10 10:54:11.914992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.863 [2024-06-10 10:54:11.915031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.863 [2024-06-10 10:54:11.915042] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.863 [2024-06-10 10:54:11.915289] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.863 [2024-06-10 10:54:11.915519] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.863 [2024-06-10 10:54:11.915527] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.863 [2024-06-10 10:54:11.915535] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
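Editor's note: besides further reconnect failures, this block records the relaunched target coming up: tracepoint group mask 0xFFFF is enabled, the app points at 'spdk_trace -s nvmf -i 0' (or copying /dev/shm/nvmf_trace.0) for later analysis, and reactors start on cores 1-3, which matches the 0xE core mask. A minimal sketch of capturing that trace data for offline inspection, using only the commands the notices themselves mention; redirecting the tool's output to a file is an assumption about how one would keep the snapshot:

  # Snapshot the live trace for the nvmf app with shm id 0, as the notice suggests:
  spdk_trace -s nvmf -i 0 > nvmf_trace.txt
  # Or keep the raw shared-memory trace file for offline analysis/debug:
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0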
00:28:47.863 [2024-06-10 10:54:11.919083] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.863 [2024-06-10 10:54:11.928294] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.863 [2024-06-10 10:54:11.929011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.863 [2024-06-10 10:54:11.929050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.863 [2024-06-10 10:54:11.929061] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.863 [2024-06-10 10:54:11.929308] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.863 [2024-06-10 10:54:11.929532] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.863 [2024-06-10 10:54:11.929540] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.863 [2024-06-10 10:54:11.929548] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.863 [2024-06-10 10:54:11.933097] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.863 [2024-06-10 10:54:11.942090] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.864 [2024-06-10 10:54:11.942618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.864 [2024-06-10 10:54:11.942637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.864 [2024-06-10 10:54:11.942645] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.864 [2024-06-10 10:54:11.942866] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.864 [2024-06-10 10:54:11.943085] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.864 [2024-06-10 10:54:11.943092] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.864 [2024-06-10 10:54:11.943099] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.864 [2024-06-10 10:54:11.946647] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.864 [2024-06-10 10:54:11.956050] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.864 [2024-06-10 10:54:11.956565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.864 [2024-06-10 10:54:11.956604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.864 [2024-06-10 10:54:11.956615] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.864 [2024-06-10 10:54:11.956854] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.864 [2024-06-10 10:54:11.957078] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.864 [2024-06-10 10:54:11.957087] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.864 [2024-06-10 10:54:11.957094] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.864 [2024-06-10 10:54:11.960652] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.864 [2024-06-10 10:54:11.969853] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.864 [2024-06-10 10:54:11.970566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.864 [2024-06-10 10:54:11.970603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.864 [2024-06-10 10:54:11.970614] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.864 [2024-06-10 10:54:11.970852] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.864 [2024-06-10 10:54:11.971075] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.864 [2024-06-10 10:54:11.971083] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.864 [2024-06-10 10:54:11.971091] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.864 [2024-06-10 10:54:11.974645] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.864 [2024-06-10 10:54:11.983849] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.864 [2024-06-10 10:54:11.984601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.864 [2024-06-10 10:54:11.984639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.864 [2024-06-10 10:54:11.984650] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.864 [2024-06-10 10:54:11.984888] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.864 [2024-06-10 10:54:11.985110] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.864 [2024-06-10 10:54:11.985119] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.864 [2024-06-10 10:54:11.985127] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.864 [2024-06-10 10:54:11.988684] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.864 [2024-06-10 10:54:11.997741] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.864 [2024-06-10 10:54:11.998567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.864 [2024-06-10 10:54:11.998605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.864 [2024-06-10 10:54:11.998616] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.864 [2024-06-10 10:54:11.998854] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.864 [2024-06-10 10:54:11.999077] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.864 [2024-06-10 10:54:11.999086] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.864 [2024-06-10 10:54:11.999093] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.864 [2024-06-10 10:54:12.002647] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.864 [2024-06-10 10:54:12.011643] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.864 [2024-06-10 10:54:12.012287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.864 [2024-06-10 10:54:12.012306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.864 [2024-06-10 10:54:12.012317] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.864 [2024-06-10 10:54:12.012537] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.864 [2024-06-10 10:54:12.012756] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.864 [2024-06-10 10:54:12.012764] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.864 [2024-06-10 10:54:12.012771] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.864 [2024-06-10 10:54:12.016358] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.864 [2024-06-10 10:54:12.025559] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.864 [2024-06-10 10:54:12.026082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.864 [2024-06-10 10:54:12.026097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.864 [2024-06-10 10:54:12.026105] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.864 [2024-06-10 10:54:12.026328] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.864 [2024-06-10 10:54:12.026547] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.864 [2024-06-10 10:54:12.026555] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.864 [2024-06-10 10:54:12.026562] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.864 [2024-06-10 10:54:12.030101] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.864 [2024-06-10 10:54:12.039507] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.864 [2024-06-10 10:54:12.040047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.864 [2024-06-10 10:54:12.040061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.864 [2024-06-10 10:54:12.040069] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.864 [2024-06-10 10:54:12.040291] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.864 [2024-06-10 10:54:12.040510] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.864 [2024-06-10 10:54:12.040517] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.864 [2024-06-10 10:54:12.040524] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.864 [2024-06-10 10:54:12.044061] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.864 [2024-06-10 10:54:12.053461] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.864 [2024-06-10 10:54:12.054187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.864 [2024-06-10 10:54:12.054225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.864 [2024-06-10 10:54:12.054237] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.864 [2024-06-10 10:54:12.054486] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.864 [2024-06-10 10:54:12.054709] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.864 [2024-06-10 10:54:12.054721] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.864 [2024-06-10 10:54:12.054729] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.864 [2024-06-10 10:54:12.058277] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.864 [2024-06-10 10:54:12.067270] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.864 [2024-06-10 10:54:12.067888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.864 [2024-06-10 10:54:12.067906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.864 [2024-06-10 10:54:12.067914] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.864 [2024-06-10 10:54:12.068132] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.864 [2024-06-10 10:54:12.068356] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.864 [2024-06-10 10:54:12.068364] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.864 [2024-06-10 10:54:12.068370] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.864 [2024-06-10 10:54:12.071913] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.864 [2024-06-10 10:54:12.081107] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.864 [2024-06-10 10:54:12.081865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.864 [2024-06-10 10:54:12.081903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.865 [2024-06-10 10:54:12.081914] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.865 [2024-06-10 10:54:12.082152] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.865 [2024-06-10 10:54:12.082383] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.865 [2024-06-10 10:54:12.082391] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.865 [2024-06-10 10:54:12.082399] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.865 [2024-06-10 10:54:12.085945] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.865 [2024-06-10 10:54:12.094945] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.865 [2024-06-10 10:54:12.095415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.865 [2024-06-10 10:54:12.095453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.865 [2024-06-10 10:54:12.095465] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.865 [2024-06-10 10:54:12.095706] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.865 [2024-06-10 10:54:12.095929] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.865 [2024-06-10 10:54:12.095938] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.865 [2024-06-10 10:54:12.095945] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.865 [2024-06-10 10:54:12.099513] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.865 [2024-06-10 10:54:12.108924] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.865 [2024-06-10 10:54:12.109626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.865 [2024-06-10 10:54:12.109664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.865 [2024-06-10 10:54:12.109675] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.865 [2024-06-10 10:54:12.109913] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.865 [2024-06-10 10:54:12.110136] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.865 [2024-06-10 10:54:12.110144] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.865 [2024-06-10 10:54:12.110151] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.865 [2024-06-10 10:54:12.113705] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.865 [2024-06-10 10:54:12.122906] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.865 [2024-06-10 10:54:12.123648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.865 [2024-06-10 10:54:12.123685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.865 [2024-06-10 10:54:12.123696] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.865 [2024-06-10 10:54:12.123934] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.865 [2024-06-10 10:54:12.124157] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.865 [2024-06-10 10:54:12.124165] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.865 [2024-06-10 10:54:12.124172] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.865 [2024-06-10 10:54:12.127728] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.865 [2024-06-10 10:54:12.136724] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.865 [2024-06-10 10:54:12.137534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.865 [2024-06-10 10:54:12.137572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:47.865 [2024-06-10 10:54:12.137583] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:47.865 [2024-06-10 10:54:12.137821] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:47.865 [2024-06-10 10:54:12.138044] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.865 [2024-06-10 10:54:12.138053] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.865 [2024-06-10 10:54:12.138060] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.865 [2024-06-10 10:54:12.141615] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.127 [2024-06-10 10:54:12.150603] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.127 [2024-06-10 10:54:12.151261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.127 [2024-06-10 10:54:12.151280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:48.127 [2024-06-10 10:54:12.151288] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:48.127 [2024-06-10 10:54:12.151512] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:48.127 [2024-06-10 10:54:12.151731] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.127 [2024-06-10 10:54:12.151739] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.127 [2024-06-10 10:54:12.151745] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.127 [2024-06-10 10:54:12.155290] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.127 [2024-06-10 10:54:12.164480] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.127 [2024-06-10 10:54:12.165067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.127 [2024-06-10 10:54:12.165082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:48.127 [2024-06-10 10:54:12.165089] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:48.127 [2024-06-10 10:54:12.165312] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:48.127 [2024-06-10 10:54:12.165531] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.127 [2024-06-10 10:54:12.165539] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.127 [2024-06-10 10:54:12.165546] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.127 [2024-06-10 10:54:12.169086] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.127 [2024-06-10 10:54:12.178282] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.127 [2024-06-10 10:54:12.178877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.127 [2024-06-10 10:54:12.178892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:48.127 [2024-06-10 10:54:12.178899] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:48.127 [2024-06-10 10:54:12.179117] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:48.127 [2024-06-10 10:54:12.179341] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.127 [2024-06-10 10:54:12.179349] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.127 [2024-06-10 10:54:12.179356] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.127 [2024-06-10 10:54:12.182896] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.127 [2024-06-10 10:54:12.192084] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.127 [2024-06-10 10:54:12.192715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.127 [2024-06-10 10:54:12.192730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:48.127 [2024-06-10 10:54:12.192737] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:48.127 [2024-06-10 10:54:12.192955] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:48.127 [2024-06-10 10:54:12.193174] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.127 [2024-06-10 10:54:12.193181] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.127 [2024-06-10 10:54:12.193192] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.127 [2024-06-10 10:54:12.196744] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.127 [2024-06-10 10:54:12.205971] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.127 [2024-06-10 10:54:12.206384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.127 [2024-06-10 10:54:12.206400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:48.127 [2024-06-10 10:54:12.206407] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:48.127 [2024-06-10 10:54:12.206626] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:48.128 [2024-06-10 10:54:12.206844] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.128 [2024-06-10 10:54:12.206852] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.128 [2024-06-10 10:54:12.206859] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.128 [2024-06-10 10:54:12.210444] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.128 [2024-06-10 10:54:12.219844] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.128 [2024-06-10 10:54:12.220563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.128 [2024-06-10 10:54:12.220600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:48.128 [2024-06-10 10:54:12.220611] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:48.128 [2024-06-10 10:54:12.220848] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:48.128 [2024-06-10 10:54:12.221072] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.128 [2024-06-10 10:54:12.221080] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.128 [2024-06-10 10:54:12.221088] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.128 [2024-06-10 10:54:12.224645] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.128 [2024-06-10 10:54:12.233636] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.128 [2024-06-10 10:54:12.234344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.128 [2024-06-10 10:54:12.234381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:48.128 [2024-06-10 10:54:12.234393] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:48.128 [2024-06-10 10:54:12.234635] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:48.128 [2024-06-10 10:54:12.234858] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.128 [2024-06-10 10:54:12.234867] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.128 [2024-06-10 10:54:12.234874] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.128 [2024-06-10 10:54:12.238430] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.128 [2024-06-10 10:54:12.247629] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.128 [2024-06-10 10:54:12.248344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.128 [2024-06-10 10:54:12.248383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:48.128 [2024-06-10 10:54:12.248394] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:48.128 [2024-06-10 10:54:12.248634] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:48.128 [2024-06-10 10:54:12.248857] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.128 [2024-06-10 10:54:12.248865] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.128 [2024-06-10 10:54:12.248873] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.128 [2024-06-10 10:54:12.252428] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.128 [2024-06-10 10:54:12.261628] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.128 [2024-06-10 10:54:12.262198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.128 [2024-06-10 10:54:12.262236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:48.128 [2024-06-10 10:54:12.262254] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:48.128 [2024-06-10 10:54:12.262493] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:48.128 [2024-06-10 10:54:12.262716] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.128 [2024-06-10 10:54:12.262724] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.128 [2024-06-10 10:54:12.262731] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.128 [2024-06-10 10:54:12.266281] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.128 [2024-06-10 10:54:12.275482] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.128 [2024-06-10 10:54:12.276192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.128 [2024-06-10 10:54:12.276230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:48.128 [2024-06-10 10:54:12.276249] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:48.128 [2024-06-10 10:54:12.276491] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:48.128 [2024-06-10 10:54:12.276714] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.128 [2024-06-10 10:54:12.276721] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.128 [2024-06-10 10:54:12.276729] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.128 [2024-06-10 10:54:12.280277] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.128 [2024-06-10 10:54:12.289478] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.128 [2024-06-10 10:54:12.289821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.128 [2024-06-10 10:54:12.289846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:48.128 [2024-06-10 10:54:12.289854] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:48.128 [2024-06-10 10:54:12.290078] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:48.128 [2024-06-10 10:54:12.290312] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.128 [2024-06-10 10:54:12.290320] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.128 [2024-06-10 10:54:12.290327] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.128 [2024-06-10 10:54:12.293872] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.128 [2024-06-10 10:54:12.303288] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.128 [2024-06-10 10:54:12.303888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.128 [2024-06-10 10:54:12.303903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:48.128 [2024-06-10 10:54:12.303911] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:48.128 [2024-06-10 10:54:12.304129] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:48.128 [2024-06-10 10:54:12.304352] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.128 [2024-06-10 10:54:12.304360] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.128 [2024-06-10 10:54:12.304367] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.128 [2024-06-10 10:54:12.307909] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.128 [2024-06-10 10:54:12.317102] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.128 [2024-06-10 10:54:12.317770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.128 [2024-06-10 10:54:12.317808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:48.128 [2024-06-10 10:54:12.317818] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:48.128 [2024-06-10 10:54:12.318056] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:48.128 [2024-06-10 10:54:12.318289] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.128 [2024-06-10 10:54:12.318298] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.128 [2024-06-10 10:54:12.318306] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.128 [2024-06-10 10:54:12.321852] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.128 [2024-06-10 10:54:12.331060] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.128 [2024-06-10 10:54:12.331667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.128 [2024-06-10 10:54:12.331685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:48.128 [2024-06-10 10:54:12.331693] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:48.128 [2024-06-10 10:54:12.331911] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:48.128 [2024-06-10 10:54:12.332130] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.128 [2024-06-10 10:54:12.332137] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.128 [2024-06-10 10:54:12.332144] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.128 [2024-06-10 10:54:12.335697] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.128 [2024-06-10 10:54:12.344902] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.128 [2024-06-10 10:54:12.345509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.128 [2024-06-10 10:54:12.345524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:48.128 [2024-06-10 10:54:12.345532] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:48.128 [2024-06-10 10:54:12.345750] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:48.128 [2024-06-10 10:54:12.345970] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.128 [2024-06-10 10:54:12.345978] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.128 [2024-06-10 10:54:12.345984] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.129 [2024-06-10 10:54:12.349529] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.129 [2024-06-10 10:54:12.358725] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.129 [2024-06-10 10:54:12.359446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.129 [2024-06-10 10:54:12.359484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:48.129 [2024-06-10 10:54:12.359494] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:48.129 [2024-06-10 10:54:12.359732] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:48.129 [2024-06-10 10:54:12.359955] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.129 [2024-06-10 10:54:12.359963] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.129 [2024-06-10 10:54:12.359970] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.129 [2024-06-10 10:54:12.363527] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.129 [2024-06-10 10:54:12.372524] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.129 [2024-06-10 10:54:12.373216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.129 [2024-06-10 10:54:12.373261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:48.129 [2024-06-10 10:54:12.373274] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:48.129 [2024-06-10 10:54:12.373513] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:48.129 [2024-06-10 10:54:12.373737] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.129 [2024-06-10 10:54:12.373745] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.129 [2024-06-10 10:54:12.373752] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.129 [2024-06-10 10:54:12.377304] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.129 [2024-06-10 10:54:12.386503] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.129 [2024-06-10 10:54:12.387238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.129 [2024-06-10 10:54:12.387283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:48.129 [2024-06-10 10:54:12.387297] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:48.129 [2024-06-10 10:54:12.387535] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:48.129 [2024-06-10 10:54:12.387758] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.129 [2024-06-10 10:54:12.387767] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.129 [2024-06-10 10:54:12.387774] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.129 [2024-06-10 10:54:12.391324] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.129 [2024-06-10 10:54:12.400326] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.129 [2024-06-10 10:54:12.400848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.129 [2024-06-10 10:54:12.400865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:48.129 [2024-06-10 10:54:12.400873] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:48.129 [2024-06-10 10:54:12.401092] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:48.129 [2024-06-10 10:54:12.401316] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.129 [2024-06-10 10:54:12.401324] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.129 [2024-06-10 10:54:12.401331] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.129 [2024-06-10 10:54:12.404870] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.391 [2024-06-10 10:54:12.414308] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.391 [2024-06-10 10:54:12.415000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.391 [2024-06-10 10:54:12.415037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:48.391 [2024-06-10 10:54:12.415049] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:48.391 [2024-06-10 10:54:12.415296] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:48.391 [2024-06-10 10:54:12.415520] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.391 [2024-06-10 10:54:12.415528] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.391 [2024-06-10 10:54:12.415536] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.391 [2024-06-10 10:54:12.419082] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.391 [2024-06-10 10:54:12.428286] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.391 [2024-06-10 10:54:12.428925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.391 [2024-06-10 10:54:12.428943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:48.391 [2024-06-10 10:54:12.428950] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:48.391 [2024-06-10 10:54:12.429170] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:48.391 [2024-06-10 10:54:12.429394] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.391 [2024-06-10 10:54:12.429407] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.391 [2024-06-10 10:54:12.429414] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.391 [2024-06-10 10:54:12.432958] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.391 [2024-06-10 10:54:12.442155] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.391 [2024-06-10 10:54:12.442860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.391 [2024-06-10 10:54:12.442898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:48.391 [2024-06-10 10:54:12.442909] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:48.391 [2024-06-10 10:54:12.443148] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:48.391 [2024-06-10 10:54:12.443378] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.391 [2024-06-10 10:54:12.443387] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.391 [2024-06-10 10:54:12.443395] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.391 [2024-06-10 10:54:12.446943] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.391 [2024-06-10 10:54:12.456143] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.391 [2024-06-10 10:54:12.456775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.391 [2024-06-10 10:54:12.456794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:48.391 [2024-06-10 10:54:12.456801] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:48.391 [2024-06-10 10:54:12.457020] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:48.391 [2024-06-10 10:54:12.457239] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.391 [2024-06-10 10:54:12.457254] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.391 [2024-06-10 10:54:12.457261] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.391 [2024-06-10 10:54:12.460803] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.391 [2024-06-10 10:54:12.470110] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.391 [2024-06-10 10:54:12.470810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.391 [2024-06-10 10:54:12.470848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:48.391 [2024-06-10 10:54:12.470859] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:48.391 [2024-06-10 10:54:12.471097] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:48.391 [2024-06-10 10:54:12.471328] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.391 [2024-06-10 10:54:12.471337] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.391 [2024-06-10 10:54:12.471345] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.391 [2024-06-10 10:54:12.474893] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.391 [2024-06-10 10:54:12.484094] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.391 [2024-06-10 10:54:12.484838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.391 [2024-06-10 10:54:12.484875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:48.392 [2024-06-10 10:54:12.484886] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:48.392 [2024-06-10 10:54:12.485124] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:48.392 [2024-06-10 10:54:12.485358] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.392 [2024-06-10 10:54:12.485367] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.392 [2024-06-10 10:54:12.485374] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.392 [2024-06-10 10:54:12.488922] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.392 [2024-06-10 10:54:12.497918] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.392 [2024-06-10 10:54:12.498624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.392 [2024-06-10 10:54:12.498662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:48.392 [2024-06-10 10:54:12.498672] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:48.392 [2024-06-10 10:54:12.498911] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:48.392 [2024-06-10 10:54:12.499134] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.392 [2024-06-10 10:54:12.499143] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.392 [2024-06-10 10:54:12.499150] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.392 [2024-06-10 10:54:12.502705] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.392 [2024-06-10 10:54:12.511917] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.392 [2024-06-10 10:54:12.512645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.392 [2024-06-10 10:54:12.512682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:48.392 [2024-06-10 10:54:12.512694] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:48.392 [2024-06-10 10:54:12.512934] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:48.392 [2024-06-10 10:54:12.513157] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.392 [2024-06-10 10:54:12.513168] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.392 [2024-06-10 10:54:12.513176] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.392 [2024-06-10 10:54:12.516733] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.392 [2024-06-10 10:54:12.525722] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.392 [2024-06-10 10:54:12.526505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.392 [2024-06-10 10:54:12.526543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:48.392 [2024-06-10 10:54:12.526553] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:48.392 [2024-06-10 10:54:12.526796] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:48.392 [2024-06-10 10:54:12.527020] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.392 [2024-06-10 10:54:12.527028] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.392 [2024-06-10 10:54:12.527036] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.392 [2024-06-10 10:54:12.530593] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.392 [2024-06-10 10:54:12.539579] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.392 [2024-06-10 10:54:12.540325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.392 [2024-06-10 10:54:12.540363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:48.392 [2024-06-10 10:54:12.540375] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:48.392 [2024-06-10 10:54:12.540617] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:48.392 [2024-06-10 10:54:12.540840] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.392 [2024-06-10 10:54:12.540848] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.392 [2024-06-10 10:54:12.540856] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.392 [2024-06-10 10:54:12.544410] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.392 [2024-06-10 10:54:12.553399] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.392 [2024-06-10 10:54:12.554128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.392 [2024-06-10 10:54:12.554165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:48.392 [2024-06-10 10:54:12.554176] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:48.392 [2024-06-10 10:54:12.554421] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:48.392 [2024-06-10 10:54:12.554645] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.392 [2024-06-10 10:54:12.554653] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.392 [2024-06-10 10:54:12.554661] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.392 [2024-06-10 10:54:12.558207] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
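Every retry in the block above fails the same way: posix_sock_create reports connect() failed, errno = 111, which on Linux is ECONNREFUSED, meaning nothing is accepting TCP connections on 10.0.0.2:4420 at this point; the resets only start succeeding once the rpc_cmd calls further down re-create the subsystem and its listener. A quick shell one-liner to confirm the errno name (a sketch, not part of the test scripts):

    python3 -c "import errno, os; print(errno.errorcode[111], '-', os.strerror(111))"
    # prints on Linux: ECONNREFUSED - Connection refused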
00:28:48.392 10:54:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:48.392 10:54:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:28:48.392 10:54:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:48.392 10:54:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:48.392 10:54:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:48.392 [2024-06-10 10:54:12.567197] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.392 [2024-06-10 10:54:12.567902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.392 [2024-06-10 10:54:12.567940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:48.392 [2024-06-10 10:54:12.567952] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:48.392 [2024-06-10 10:54:12.568196] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:48.392 [2024-06-10 10:54:12.568428] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.392 [2024-06-10 10:54:12.568437] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.392 [2024-06-10 10:54:12.568445] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.392 [2024-06-10 10:54:12.571993] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.392 [2024-06-10 10:54:12.581193] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.392 [2024-06-10 10:54:12.581896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.392 [2024-06-10 10:54:12.581935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:48.392 [2024-06-10 10:54:12.581945] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:48.392 [2024-06-10 10:54:12.582183] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:48.392 [2024-06-10 10:54:12.582415] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.392 [2024-06-10 10:54:12.582425] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.392 [2024-06-10 10:54:12.582432] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.392 [2024-06-10 10:54:12.585979] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.392 [2024-06-10 10:54:12.595178] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.392 [2024-06-10 10:54:12.595653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.392 [2024-06-10 10:54:12.595671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:48.392 [2024-06-10 10:54:12.595679] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:48.392 [2024-06-10 10:54:12.595899] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:48.392 [2024-06-10 10:54:12.596117] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.392 [2024-06-10 10:54:12.596125] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.392 [2024-06-10 10:54:12.596132] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.392 [2024-06-10 10:54:12.599689] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.392 10:54:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:48.392 10:54:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:48.392 10:54:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:48.392 10:54:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:48.392 [2024-06-10 10:54:12.607069] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:48.392 [2024-06-10 10:54:12.609098] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.392 [2024-06-10 10:54:12.609698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.392 [2024-06-10 10:54:12.609713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:48.392 [2024-06-10 10:54:12.609725] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:48.392 [2024-06-10 10:54:12.609943] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:48.393 [2024-06-10 10:54:12.610161] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.393 [2024-06-10 10:54:12.610169] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.393 [2024-06-10 10:54:12.610176] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:48.393 10:54:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:48.393 10:54:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:48.393 10:54:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:48.393 10:54:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:48.393 [2024-06-10 10:54:12.613722] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.393 [2024-06-10 10:54:12.622944] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.393 [2024-06-10 10:54:12.623509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.393 [2024-06-10 10:54:12.623546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:48.393 [2024-06-10 10:54:12.623558] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:48.393 [2024-06-10 10:54:12.623797] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:48.393 [2024-06-10 10:54:12.624020] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.393 [2024-06-10 10:54:12.624028] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.393 [2024-06-10 10:54:12.624035] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.393 [2024-06-10 10:54:12.627591] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.393 Malloc0 00:28:48.393 [2024-06-10 10:54:12.636792] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.393 10:54:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:48.393 [2024-06-10 10:54:12.637553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.393 [2024-06-10 10:54:12.637591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:48.393 [2024-06-10 10:54:12.637602] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:48.393 [2024-06-10 10:54:12.637841] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:48.393 10:54:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:48.393 [2024-06-10 10:54:12.638065] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.393 [2024-06-10 10:54:12.638074] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.393 [2024-06-10 10:54:12.638082] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:48.393 10:54:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:48.393 10:54:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:48.393 [2024-06-10 10:54:12.641633] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.393 10:54:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:48.393 10:54:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:48.393 10:54:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:48.393 10:54:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:48.393 [2024-06-10 10:54:12.650618] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.393 [2024-06-10 10:54:12.651172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.393 [2024-06-10 10:54:12.651209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:48.393 [2024-06-10 10:54:12.651220] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:48.393 [2024-06-10 10:54:12.651468] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:48.393 [2024-06-10 10:54:12.651693] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.393 [2024-06-10 10:54:12.651701] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.393 [2024-06-10 10:54:12.651709] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.393 [2024-06-10 10:54:12.655327] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.393 10:54:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:48.393 10:54:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:48.393 10:54:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:48.393 10:54:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:48.393 [2024-06-10 10:54:12.664535] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.393 [2024-06-10 10:54:12.665222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.393 [2024-06-10 10:54:12.665265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb8130 with addr=10.0.0.2, port=4420 00:28:48.393 [2024-06-10 10:54:12.665276] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8130 is same with the state(5) to be set 00:28:48.393 [2024-06-10 10:54:12.665514] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb8130 (9): Bad file descriptor 00:28:48.393 [2024-06-10 10:54:12.665737] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.393 [2024-06-10 10:54:12.665746] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.393 [2024-06-10 10:54:12.665753] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.393 [2024-06-10 10:54:12.668336] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:48.393 [2024-06-10 10:54:12.668514] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:48.393 [2024-06-10 10:54:12.669304] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.393 10:54:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:48.393 10:54:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1016357 00:28:48.654 [2024-06-10 10:54:12.678502] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.654 [2024-06-10 10:54:12.850083] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
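The interleaved rpc_cmd calls traced above (host/bdevperf.sh@17 through @21) rebuild the target side that those failing resets were waiting for: a TCP transport, a 64 MB Malloc0 bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 carrying that bdev as a namespace, and a listener on 10.0.0.2:4420, after which the log switches from "Resetting controller failed" to "Resetting controller successful". Run outside the harness, the same setup would look roughly like this sketch (rpc_cmd is the suite's wrapper around scripts/rpc.py, so the flags are copied from the trace, not invented):

    # target-side setup against an already running nvmf_tgt
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                # 64 MB malloc bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420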
00:28:58.692 
00:28:58.692                                                                                                Latency(us)
00:28:58.692 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:58.692 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:58.692 	 Verification LBA range: start 0x0 length 0x4000
00:28:58.692 	 Nvme1n1                  :      15.00    8164.12      31.89    9953.27       0.00    7039.97     795.31   16274.77
00:28:58.692 ===================================================================================================================
00:28:58.692 Total                       :                 8164.12      31.89    9953.27       0.00    7039.97     795.31   16274.77
00:28:58.692 10:54:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:28:58.692 10:54:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:58.692 10:54:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable
00:28:58.692 10:54:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:58.692 10:54:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:28:58.692 10:54:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:28:58.692 10:54:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:28:58.692 10:54:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:58.692 10:54:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:28:58.692 10:54:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:28:58.692 10:54:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:28:58.692 10:54:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:28:58.692 10:54:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:28:58.692 rmmod nvme_tcp
00:28:58.692 rmmod nvme_fabrics
00:28:58.692 rmmod nvme_keyring
00:28:58.692 10:54:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:28:58.692 10:54:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:28:58.692 10:54:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:28:58.692 10:54:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1017832 ']'
00:28:58.692 10:54:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1017832
00:28:58.692 10:54:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@949 -- # '[' -z 1017832 ']'
00:28:58.692 10:54:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # kill -0 1017832
00:28:58.692 10:54:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # uname
00:28:58.692 10:54:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:28:58.692 10:54:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1017832
00:28:58.692 10:54:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:28:58.692 10:54:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:28:58.692 10:54:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1017832'
00:28:58.692 killing process with pid 1017832
00:28:58.692 10:54:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@968 -- # kill 1017832
00:28:58.692 [2024-06-10 10:54:21.473491] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:28:58.693 10:54:21 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@973 -- # wait 1017832 00:28:58.693 10:54:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:58.693 10:54:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:58.693 10:54:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:58.693 10:54:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:58.693 10:54:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:58.693 10:54:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.693 10:54:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:58.693 10:54:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.634 10:54:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:59.634 00:28:59.634 real 0m27.833s 00:28:59.634 user 1m3.133s 00:28:59.634 sys 0m6.978s 00:28:59.634 10:54:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:59.634 10:54:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:59.634 ************************************ 00:28:59.634 END TEST nvmf_bdevperf 00:28:59.634 ************************************ 00:28:59.634 10:54:23 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:59.634 10:54:23 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:28:59.634 10:54:23 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:59.634 10:54:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:59.634 ************************************ 00:28:59.634 START TEST nvmf_target_disconnect 00:28:59.634 ************************************ 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:59.634 * Looking for test storage... 
00:28:59.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:28:59.634 10:54:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:07.775 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:07.775 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.775 10:54:30 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:07.775 Found net devices under 0000:31:00.0: cvl_0_0 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:07.775 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.776 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:07.776 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.776 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:07.776 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:07.776 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.776 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:07.776 Found net devices under 0000:31:00.1: cvl_0_1 00:29:07.776 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.776 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:07.776 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:29:07.776 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:07.776 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:07.776 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:07.776 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:07.776 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:07.776 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:07.776 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:07.776 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:07.776 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:07.776 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:07.776 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:07.776 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:07.776 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:07.776 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:07.776 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:07.776 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:07.776 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:07.776 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:29:07.776 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:07.776 10:54:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:07.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:07.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:29:07.776 00:29:07.776 --- 10.0.0.2 ping statistics --- 00:29:07.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.776 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:07.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:07.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.340 ms 00:29:07.776 00:29:07.776 --- 10.0.0.1 ping statistics --- 00:29:07.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.776 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:07.776 ************************************ 00:29:07.776 START TEST nvmf_target_disconnect_tc1 00:29:07.776 ************************************ 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc1 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@649 -- # local es=0 00:29:07.776 
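The nvmf_tcp_init trace above builds the two-port E810 topology that the rest of target_disconnect.sh runs on: the first port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as the target at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and one ping in each direction confirms the path. Consolidated into one place, the same commands look like this sketch:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port into its own namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator-side address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # accept NVMe/TCP (port 4420) arriving on cvl_0_1
    ping -c 1 10.0.0.2                                               # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target -> initiator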
10:54:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:07.776 EAL: No free 2048 kB hugepages reported on node 1 00:29:07.776 [2024-06-10 10:54:31.237527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.776 [2024-06-10 10:54:31.237592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19af280 with addr=10.0.0.2, port=4420 00:29:07.776 [2024-06-10 10:54:31.237623] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:07.776 [2024-06-10 10:54:31.237638] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:07.776 [2024-06-10 10:54:31.237645] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:07.776 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:07.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:07.776 Initializing NVMe Controllers 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # es=1 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:07.776 00:29:07.776 real 0m0.106s 00:29:07.776 user 0m0.055s 00:29:07.776 sys 
0m0.050s 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:07.776 ************************************ 00:29:07.776 END TEST nvmf_target_disconnect_tc1 00:29:07.776 ************************************ 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:07.776 ************************************ 00:29:07.776 START TEST nvmf_target_disconnect_tc2 00:29:07.776 ************************************ 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc2 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1023940 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1023940 00:29:07.776 10:54:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:07.777 10:54:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 1023940 ']' 00:29:07.777 10:54:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.777 10:54:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:07.777 10:54:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:07.777 10:54:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:07.777 10:54:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:07.777 [2024-06-10 10:54:31.391958] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
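nvmf_target_disconnect_tc1 above is a negative test: the reconnect example is pointed at 10.0.0.2:4420 before any target has been started, so connect() fails with errno 111 (ECONNREFUSED on Linux), spdk_nvme_probe() cannot create the admin qpair, and the NOT wrapper treats that expected failure (es=1 here) as a pass. A hedged sketch of the same expect-failure pattern; expect_failure below is a simplified stand-in for the harness's NOT helper, not its actual implementation:

    # succeed only if the wrapped command fails (simplified; NOT also inspects the exit-code range)
    expect_failure() {
        if "$@"; then
            echo "expected failure, but the command succeeded" >&2
            return 1
        fi
    }

    # no target is listening yet, so probing 10.0.0.2:4420 must fail
    expect_failure ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'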
00:29:07.777 [2024-06-10 10:54:31.392016] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:07.777 EAL: No free 2048 kB hugepages reported on node 1 00:29:07.777 [2024-06-10 10:54:31.465035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:07.777 [2024-06-10 10:54:31.560519] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:07.777 [2024-06-10 10:54:31.560578] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:07.777 [2024-06-10 10:54:31.560586] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:07.777 [2024-06-10 10:54:31.560593] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:07.777 [2024-06-10 10:54:31.560599] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:07.777 [2024-06-10 10:54:31.560696] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 5 00:29:07.777 [2024-06-10 10:54:31.560837] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 6 00:29:07.777 [2024-06-10 10:54:31.561323] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 7 00:29:07.777 [2024-06-10 10:54:31.561325] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4 00:29:08.039 10:54:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:08.039 10:54:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0 00:29:08.039 10:54:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:08.039 10:54:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:08.039 10:54:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:08.039 10:54:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:08.039 10:54:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:08.039 10:54:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:08.039 10:54:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:08.039 Malloc0 00:29:08.039 10:54:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:08.039 10:54:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:08.039 10:54:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:08.039 10:54:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:08.039 [2024-06-10 10:54:32.242000] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:08.039 10:54:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:08.039 10:54:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:08.039 10:54:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:08.039 10:54:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:08.039 10:54:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:08.039 10:54:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:08.039 10:54:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:08.039 10:54:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:08.039 10:54:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:08.039 10:54:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:08.039 10:54:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:08.039 10:54:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:08.039 [2024-06-10 10:54:32.270025] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:08.039 [2024-06-10 10:54:32.270349] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:08.039 10:54:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:08.039 10:54:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:08.039 10:54:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:08.039 10:54:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:08.039 10:54:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:08.039 10:54:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1024287 00:29:08.039 10:54:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:08.039 10:54:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:08.298 EAL: No free 2048 kB hugepages reported on node 1 00:29:10.219 10:54:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1023940 00:29:10.219 10:54:34 
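For tc2 the harness first brings up a real target: nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace with core mask 0xF0, then configured over its RPC socket with a 64 MB malloc bdev, a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 (any host allowed), the Malloc0 namespace, and listeners on 10.0.0.2:4420 for both the subsystem and discovery. rpc_cmd in the log wraps SPDK's scripts/rpc.py; a rough hand-driven equivalent using the same parameters (paths shortened to the build tree):

    # start the target in the namespace created earlier, with the same flags as this run
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &

    # configure it over the default /var/tmp/spdk.sock RPC socket
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

With the listener up, the reconnect workload (queue depth 32, 4 KiB random read/write, 10 seconds) is started against 10.0.0.2:4420; the deliberate disconnect follows right after.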
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:10.219 Read completed with error (sct=0, sc=8) 00:29:10.219 starting I/O failed 00:29:10.219 Read completed with error (sct=0, sc=8) 00:29:10.219 starting I/O failed 00:29:10.219 Read completed with error (sct=0, sc=8) 00:29:10.219 starting I/O failed 00:29:10.219 Read completed with error (sct=0, sc=8) 00:29:10.219 starting I/O failed 00:29:10.219 Read completed with error (sct=0, sc=8) 00:29:10.219 starting I/O failed 00:29:10.219 Read completed with error (sct=0, sc=8) 00:29:10.219 starting I/O failed 00:29:10.219 Read completed with error (sct=0, sc=8) 00:29:10.219 starting I/O failed 00:29:10.219 Write completed with error (sct=0, sc=8) 00:29:10.219 starting I/O failed 00:29:10.219 Write completed with error (sct=0, sc=8) 00:29:10.219 starting I/O failed 00:29:10.219 Read completed with error (sct=0, sc=8) 00:29:10.219 starting I/O failed 00:29:10.219 Read completed with error (sct=0, sc=8) 00:29:10.219 starting I/O failed 00:29:10.219 Read completed with error (sct=0, sc=8) 00:29:10.219 starting I/O failed 00:29:10.219 Write completed with error (sct=0, sc=8) 00:29:10.219 starting I/O failed 00:29:10.219 Write completed with error (sct=0, sc=8) 00:29:10.219 starting I/O failed 00:29:10.219 Write completed with error (sct=0, sc=8) 00:29:10.219 starting I/O failed 00:29:10.219 Read completed with error (sct=0, sc=8) 00:29:10.219 starting I/O failed 00:29:10.219 Read completed with error (sct=0, sc=8) 00:29:10.219 starting I/O failed 00:29:10.219 Write completed with error (sct=0, sc=8) 00:29:10.219 starting I/O failed 00:29:10.219 Read completed with error (sct=0, sc=8) 00:29:10.219 starting I/O failed 00:29:10.219 Write completed with error (sct=0, sc=8) 00:29:10.219 starting I/O failed 00:29:10.219 Read completed with error (sct=0, sc=8) 00:29:10.219 starting I/O failed 00:29:10.219 Write completed with error (sct=0, sc=8) 00:29:10.219 starting I/O failed 00:29:10.219 Read completed with error (sct=0, sc=8) 00:29:10.220 starting I/O failed 00:29:10.220 Read completed with error (sct=0, sc=8) 00:29:10.220 starting I/O failed 00:29:10.220 Write completed with error (sct=0, sc=8) 00:29:10.220 starting I/O failed 00:29:10.220 Write completed with error (sct=0, sc=8) 00:29:10.220 starting I/O failed 00:29:10.220 Read completed with error (sct=0, sc=8) 00:29:10.220 starting I/O failed 00:29:10.220 Read completed with error (sct=0, sc=8) 00:29:10.220 starting I/O failed 00:29:10.220 Read completed with error (sct=0, sc=8) 00:29:10.220 starting I/O failed 00:29:10.220 Read completed with error (sct=0, sc=8) 00:29:10.220 starting I/O failed 00:29:10.220 Write completed with error (sct=0, sc=8) 00:29:10.220 starting I/O failed 00:29:10.220 Write completed with error (sct=0, sc=8) 00:29:10.220 starting I/O failed 00:29:10.220 [2024-06-10 10:54:34.299018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:10.220 [2024-06-10 10:54:34.299641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.220 [2024-06-10 10:54:34.299679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.220 qpair failed and we were unable to recover it. 
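This is the disconnect itself: after a two-second head start for the workload, host/target_disconnect.sh SIGKILLs the nvmf_tgt process (pid 1023940). The 32 outstanding I/Os complete with error status sct=0, sc=8 (which maps to the generic NVMe "command aborted due to SQ deletion" code), the transport reports CQ transport error -6 (ENXIO, "No such device or address") on qpair 4, and every subsequent reconnect attempt gets connect() errno 111 (ECONNREFUSED) because nothing is listening on 10.0.0.2:4420 any more. The sequence, reduced to its shell skeleton with variable names following the script and paths shortened:

    # start the I/O workload in the background and remember its PID
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    reconnectpid=$!

    sleep 2                 # let I/O get going
    kill -9 "$nvmfpid"      # nvmfpid was recorded when nvmf_tgt started; this drops every queue pair
    sleep 2                 # give the initiator time to notice and start retrying
    # from here on, connect() returns 111 (ECONNREFUSED) on every retry: the listener is gone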
00:29:10.220 [2024-06-10 10:54:34.299953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.220 [2024-06-10 10:54:34.299965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.220 qpair failed and we were unable to recover it. 00:29:10.220 [2024-06-10 10:54:34.300481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.220 [2024-06-10 10:54:34.300519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.220 qpair failed and we were unable to recover it. 00:29:10.220 [2024-06-10 10:54:34.300863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.220 [2024-06-10 10:54:34.300876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.220 qpair failed and we were unable to recover it. 00:29:10.220 [2024-06-10 10:54:34.301151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.220 [2024-06-10 10:54:34.301161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.220 qpair failed and we were unable to recover it. 00:29:10.220 [2024-06-10 10:54:34.301516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.220 [2024-06-10 10:54:34.301554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.220 qpair failed and we were unable to recover it. 00:29:10.220 [2024-06-10 10:54:34.301932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.220 [2024-06-10 10:54:34.301945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.220 qpair failed and we were unable to recover it. 00:29:10.220 [2024-06-10 10:54:34.302511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.220 [2024-06-10 10:54:34.302557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.220 qpair failed and we were unable to recover it. 00:29:10.220 [2024-06-10 10:54:34.302909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.220 [2024-06-10 10:54:34.302922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.220 qpair failed and we were unable to recover it. 00:29:10.220 [2024-06-10 10:54:34.303279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.220 [2024-06-10 10:54:34.303290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.220 qpair failed and we were unable to recover it. 00:29:10.220 [2024-06-10 10:54:34.303660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.220 [2024-06-10 10:54:34.303670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.220 qpair failed and we were unable to recover it. 
00:29:10.220 [2024-06-10 10:54:34.303952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.220 [2024-06-10 10:54:34.303962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.220 qpair failed and we were unable to recover it. 00:29:10.220 [2024-06-10 10:54:34.304309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.220 [2024-06-10 10:54:34.304320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.220 qpair failed and we were unable to recover it. 00:29:10.220 [2024-06-10 10:54:34.304716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.220 [2024-06-10 10:54:34.304726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.220 qpair failed and we were unable to recover it. 00:29:10.220 [2024-06-10 10:54:34.304975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.220 [2024-06-10 10:54:34.304987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.220 qpair failed and we were unable to recover it. 00:29:10.220 [2024-06-10 10:54:34.305272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.220 [2024-06-10 10:54:34.305283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.220 qpair failed and we were unable to recover it. 00:29:10.220 [2024-06-10 10:54:34.305586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.220 [2024-06-10 10:54:34.305596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.220 qpair failed and we were unable to recover it. 00:29:10.220 [2024-06-10 10:54:34.305934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.220 [2024-06-10 10:54:34.305944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.220 qpair failed and we were unable to recover it. 00:29:10.220 [2024-06-10 10:54:34.306325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.220 [2024-06-10 10:54:34.306335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.220 qpair failed and we were unable to recover it. 00:29:10.220 [2024-06-10 10:54:34.306700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.220 [2024-06-10 10:54:34.306710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.220 qpair failed and we were unable to recover it. 00:29:10.220 [2024-06-10 10:54:34.307085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.220 [2024-06-10 10:54:34.307095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.220 qpair failed and we were unable to recover it. 
00:29:10.220 [2024-06-10 10:54:34.307530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.220 [2024-06-10 10:54:34.307541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.220 qpair failed and we were unable to recover it. 00:29:10.220 [2024-06-10 10:54:34.307967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.220 [2024-06-10 10:54:34.307977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.220 qpair failed and we were unable to recover it. 00:29:10.220 [2024-06-10 10:54:34.308202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.220 [2024-06-10 10:54:34.308212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.220 qpair failed and we were unable to recover it. 00:29:10.220 [2024-06-10 10:54:34.308592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.220 [2024-06-10 10:54:34.308603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.220 qpair failed and we were unable to recover it. 00:29:10.220 [2024-06-10 10:54:34.308953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.220 [2024-06-10 10:54:34.308964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.220 qpair failed and we were unable to recover it. 00:29:10.220 [2024-06-10 10:54:34.309261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.220 [2024-06-10 10:54:34.309271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.220 qpair failed and we were unable to recover it. 00:29:10.220 [2024-06-10 10:54:34.309532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.220 [2024-06-10 10:54:34.309542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.220 qpair failed and we were unable to recover it. 00:29:10.220 [2024-06-10 10:54:34.309914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.220 [2024-06-10 10:54:34.309924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.220 qpair failed and we were unable to recover it. 00:29:10.220 [2024-06-10 10:54:34.310312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.220 [2024-06-10 10:54:34.310322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.220 qpair failed and we were unable to recover it. 00:29:10.220 [2024-06-10 10:54:34.310723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.220 [2024-06-10 10:54:34.310733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.220 qpair failed and we were unable to recover it. 
00:29:10.220 [2024-06-10 10:54:34.311117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.220 [2024-06-10 10:54:34.311126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.220 qpair failed and we were unable to recover it. 00:29:10.220 [2024-06-10 10:54:34.311531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.220 [2024-06-10 10:54:34.311541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.220 qpair failed and we were unable to recover it. 00:29:10.220 [2024-06-10 10:54:34.311827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.220 [2024-06-10 10:54:34.311837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.220 qpair failed and we were unable to recover it. 00:29:10.221 [2024-06-10 10:54:34.312089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.221 [2024-06-10 10:54:34.312100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.221 qpair failed and we were unable to recover it. 00:29:10.221 [2024-06-10 10:54:34.312461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.221 [2024-06-10 10:54:34.312471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.221 qpair failed and we were unable to recover it. 00:29:10.221 [2024-06-10 10:54:34.312796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.221 [2024-06-10 10:54:34.312805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.221 qpair failed and we were unable to recover it. 00:29:10.221 [2024-06-10 10:54:34.313182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.221 [2024-06-10 10:54:34.313191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.221 qpair failed and we were unable to recover it. 00:29:10.221 [2024-06-10 10:54:34.313484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.221 [2024-06-10 10:54:34.313494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.221 qpair failed and we were unable to recover it. 00:29:10.221 [2024-06-10 10:54:34.313883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.221 [2024-06-10 10:54:34.313893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.221 qpair failed and we were unable to recover it. 00:29:10.221 [2024-06-10 10:54:34.314113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.221 [2024-06-10 10:54:34.314122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.221 qpair failed and we were unable to recover it. 
00:29:10.221 [2024-06-10 10:54:34.314499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.221 [2024-06-10 10:54:34.314509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.221 qpair failed and we were unable to recover it. 00:29:10.221 [2024-06-10 10:54:34.314852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.221 [2024-06-10 10:54:34.314862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.221 qpair failed and we were unable to recover it. 00:29:10.221 [2024-06-10 10:54:34.315070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.221 [2024-06-10 10:54:34.315082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.221 qpair failed and we were unable to recover it. 00:29:10.221 [2024-06-10 10:54:34.315423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.221 [2024-06-10 10:54:34.315433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.221 qpair failed and we were unable to recover it. 00:29:10.221 [2024-06-10 10:54:34.315725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.221 [2024-06-10 10:54:34.315735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.221 qpair failed and we were unable to recover it. 00:29:10.221 [2024-06-10 10:54:34.316123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.221 [2024-06-10 10:54:34.316133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.221 qpair failed and we were unable to recover it. 00:29:10.221 [2024-06-10 10:54:34.316497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.221 [2024-06-10 10:54:34.316506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.221 qpair failed and we were unable to recover it. 00:29:10.221 [2024-06-10 10:54:34.316856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.221 [2024-06-10 10:54:34.316865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.221 qpair failed and we were unable to recover it. 00:29:10.221 [2024-06-10 10:54:34.317252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.221 [2024-06-10 10:54:34.317262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.221 qpair failed and we were unable to recover it. 00:29:10.221 [2024-06-10 10:54:34.317566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.221 [2024-06-10 10:54:34.317575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.221 qpair failed and we were unable to recover it. 
00:29:10.221 [2024-06-10 10:54:34.317953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.221 [2024-06-10 10:54:34.317962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.221 qpair failed and we were unable to recover it. 00:29:10.221 [2024-06-10 10:54:34.318182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.221 [2024-06-10 10:54:34.318192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.221 qpair failed and we were unable to recover it. 00:29:10.221 [2024-06-10 10:54:34.318532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.221 [2024-06-10 10:54:34.318542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.221 qpair failed and we were unable to recover it. 00:29:10.221 [2024-06-10 10:54:34.318922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.221 [2024-06-10 10:54:34.318931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.221 qpair failed and we were unable to recover it. 00:29:10.221 [2024-06-10 10:54:34.319322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.221 [2024-06-10 10:54:34.319332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.221 qpair failed and we were unable to recover it. 00:29:10.221 [2024-06-10 10:54:34.319695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.221 [2024-06-10 10:54:34.319705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.221 qpair failed and we were unable to recover it. 00:29:10.221 [2024-06-10 10:54:34.320078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.221 [2024-06-10 10:54:34.320087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.221 qpair failed and we were unable to recover it. 00:29:10.221 [2024-06-10 10:54:34.320445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.221 [2024-06-10 10:54:34.320454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.221 qpair failed and we were unable to recover it. 00:29:10.221 [2024-06-10 10:54:34.320824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.221 [2024-06-10 10:54:34.320833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.221 qpair failed and we were unable to recover it. 00:29:10.221 [2024-06-10 10:54:34.321152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.221 [2024-06-10 10:54:34.321161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.221 qpair failed and we were unable to recover it. 
00:29:10.221 [2024-06-10 10:54:34.321557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.221 [2024-06-10 10:54:34.321567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.221 qpair failed and we were unable to recover it. 00:29:10.221 [2024-06-10 10:54:34.321924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.221 [2024-06-10 10:54:34.321934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.221 qpair failed and we were unable to recover it. 00:29:10.221 [2024-06-10 10:54:34.322252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.221 [2024-06-10 10:54:34.322261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.221 qpair failed and we were unable to recover it. 00:29:10.221 [2024-06-10 10:54:34.322512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.221 [2024-06-10 10:54:34.322521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.221 qpair failed and we were unable to recover it. 00:29:10.221 [2024-06-10 10:54:34.322874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.221 [2024-06-10 10:54:34.322883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.221 qpair failed and we were unable to recover it. 00:29:10.221 [2024-06-10 10:54:34.323160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.221 [2024-06-10 10:54:34.323170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.221 qpair failed and we were unable to recover it. 00:29:10.221 [2024-06-10 10:54:34.323559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.221 [2024-06-10 10:54:34.323569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.221 qpair failed and we were unable to recover it. 00:29:10.221 [2024-06-10 10:54:34.323912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.221 [2024-06-10 10:54:34.323922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.221 qpair failed and we were unable to recover it. 00:29:10.221 [2024-06-10 10:54:34.324300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.222 [2024-06-10 10:54:34.324309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.222 qpair failed and we were unable to recover it. 00:29:10.222 [2024-06-10 10:54:34.324673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.222 [2024-06-10 10:54:34.324682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.222 qpair failed and we were unable to recover it. 
00:29:10.222 [2024-06-10 10:54:34.325021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.222 [2024-06-10 10:54:34.325030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.222 qpair failed and we were unable to recover it. 00:29:10.222 [2024-06-10 10:54:34.325373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.222 [2024-06-10 10:54:34.325383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.222 qpair failed and we were unable to recover it. 00:29:10.222 [2024-06-10 10:54:34.325748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.222 [2024-06-10 10:54:34.325757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.222 qpair failed and we were unable to recover it. 00:29:10.222 [2024-06-10 10:54:34.326134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.222 [2024-06-10 10:54:34.326143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.222 qpair failed and we were unable to recover it. 00:29:10.222 [2024-06-10 10:54:34.326469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.222 [2024-06-10 10:54:34.326480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.222 qpair failed and we were unable to recover it. 00:29:10.222 [2024-06-10 10:54:34.326865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.222 [2024-06-10 10:54:34.326874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.222 qpair failed and we were unable to recover it. 00:29:10.222 [2024-06-10 10:54:34.327213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.222 [2024-06-10 10:54:34.327222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.222 qpair failed and we were unable to recover it. 00:29:10.222 [2024-06-10 10:54:34.327563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.222 [2024-06-10 10:54:34.327572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.222 qpair failed and we were unable to recover it. 00:29:10.222 [2024-06-10 10:54:34.327960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.222 [2024-06-10 10:54:34.327969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.222 qpair failed and we were unable to recover it. 00:29:10.222 [2024-06-10 10:54:34.328314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.222 [2024-06-10 10:54:34.328324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.222 qpair failed and we were unable to recover it. 
00:29:10.222 [2024-06-10 10:54:34.328675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.222 [2024-06-10 10:54:34.328684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.222 qpair failed and we were unable to recover it. 00:29:10.222 [2024-06-10 10:54:34.328897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.222 [2024-06-10 10:54:34.328906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.222 qpair failed and we were unable to recover it. 00:29:10.222 [2024-06-10 10:54:34.329185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.222 [2024-06-10 10:54:34.329194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.222 qpair failed and we were unable to recover it. 00:29:10.222 [2024-06-10 10:54:34.329536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.222 [2024-06-10 10:54:34.329546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.222 qpair failed and we were unable to recover it. 00:29:10.222 [2024-06-10 10:54:34.329879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.222 [2024-06-10 10:54:34.329888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.222 qpair failed and we were unable to recover it. 00:29:10.222 [2024-06-10 10:54:34.330229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.222 [2024-06-10 10:54:34.330238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.222 qpair failed and we were unable to recover it. 00:29:10.222 [2024-06-10 10:54:34.330616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.222 [2024-06-10 10:54:34.330625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.222 qpair failed and we were unable to recover it. 00:29:10.222 [2024-06-10 10:54:34.330959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.222 [2024-06-10 10:54:34.330969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.222 qpair failed and we were unable to recover it. 00:29:10.222 [2024-06-10 10:54:34.331301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.222 [2024-06-10 10:54:34.331310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.222 qpair failed and we were unable to recover it. 00:29:10.222 [2024-06-10 10:54:34.331695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.222 [2024-06-10 10:54:34.331704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.222 qpair failed and we were unable to recover it. 
00:29:10.222 [2024-06-10 10:54:34.332093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.222 [2024-06-10 10:54:34.332103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.222 qpair failed and we were unable to recover it. 00:29:10.222 [2024-06-10 10:54:34.332444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.222 [2024-06-10 10:54:34.332453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.222 qpair failed and we were unable to recover it. 00:29:10.222 [2024-06-10 10:54:34.332835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.222 [2024-06-10 10:54:34.332844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.222 qpair failed and we were unable to recover it. 00:29:10.222 [2024-06-10 10:54:34.333229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.222 [2024-06-10 10:54:34.333239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.222 qpair failed and we were unable to recover it. 00:29:10.222 [2024-06-10 10:54:34.333498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.222 [2024-06-10 10:54:34.333507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.222 qpair failed and we were unable to recover it. 00:29:10.222 [2024-06-10 10:54:34.333850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.222 [2024-06-10 10:54:34.333859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.222 qpair failed and we were unable to recover it. 00:29:10.222 [2024-06-10 10:54:34.334161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.222 [2024-06-10 10:54:34.334170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.222 qpair failed and we were unable to recover it. 00:29:10.222 [2024-06-10 10:54:34.334486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.222 [2024-06-10 10:54:34.334495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.222 qpair failed and we were unable to recover it. 00:29:10.222 [2024-06-10 10:54:34.334793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.222 [2024-06-10 10:54:34.334802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.222 qpair failed and we were unable to recover it. 00:29:10.222 [2024-06-10 10:54:34.335188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.222 [2024-06-10 10:54:34.335197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.222 qpair failed and we were unable to recover it. 
00:29:10.222 [2024-06-10 10:54:34.335543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:10.222 [2024-06-10 10:54:34.335552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 
00:29:10.222 qpair failed and we were unable to recover it. 
00:29:10.228 [... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt from [2024-06-10 10:54:34.335947] through [2024-06-10 10:54:34.408162], elapsed 00:29:10.222 to 00:29:10.228, with only the timestamps changing ...]
00:29:10.228 [2024-06-10 10:54:34.408508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.228 [2024-06-10 10:54:34.408517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.228 qpair failed and we were unable to recover it. 00:29:10.228 [2024-06-10 10:54:34.408865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.228 [2024-06-10 10:54:34.408879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.228 qpair failed and we were unable to recover it. 00:29:10.228 [2024-06-10 10:54:34.409269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.229 [2024-06-10 10:54:34.409279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.229 qpair failed and we were unable to recover it. 00:29:10.229 [2024-06-10 10:54:34.409646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.229 [2024-06-10 10:54:34.409655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.229 qpair failed and we were unable to recover it. 00:29:10.229 [2024-06-10 10:54:34.409991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.229 [2024-06-10 10:54:34.410000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.229 qpair failed and we were unable to recover it. 00:29:10.229 [2024-06-10 10:54:34.410356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.229 [2024-06-10 10:54:34.410365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.229 qpair failed and we were unable to recover it. 00:29:10.229 [2024-06-10 10:54:34.410719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.229 [2024-06-10 10:54:34.410729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.229 qpair failed and we were unable to recover it. 00:29:10.229 [2024-06-10 10:54:34.411073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.229 [2024-06-10 10:54:34.411081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.229 qpair failed and we were unable to recover it. 00:29:10.229 [2024-06-10 10:54:34.411462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.229 [2024-06-10 10:54:34.411474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.229 qpair failed and we were unable to recover it. 00:29:10.229 [2024-06-10 10:54:34.411833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.229 [2024-06-10 10:54:34.411842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.229 qpair failed and we were unable to recover it. 
00:29:10.229 [2024-06-10 10:54:34.412178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.229 [2024-06-10 10:54:34.412187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.229 qpair failed and we were unable to recover it. 00:29:10.229 [2024-06-10 10:54:34.412529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.229 [2024-06-10 10:54:34.412539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.229 qpair failed and we were unable to recover it. 00:29:10.229 [2024-06-10 10:54:34.412901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.229 [2024-06-10 10:54:34.412910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.229 qpair failed and we were unable to recover it. 00:29:10.229 [2024-06-10 10:54:34.413250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.229 [2024-06-10 10:54:34.413259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.229 qpair failed and we were unable to recover it. 00:29:10.229 [2024-06-10 10:54:34.413603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.229 [2024-06-10 10:54:34.413612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.229 qpair failed and we were unable to recover it. 00:29:10.229 [2024-06-10 10:54:34.413972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.229 [2024-06-10 10:54:34.413980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.229 qpair failed and we were unable to recover it. 00:29:10.229 [2024-06-10 10:54:34.414317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.229 [2024-06-10 10:54:34.414327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.229 qpair failed and we were unable to recover it. 00:29:10.229 [2024-06-10 10:54:34.414685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.229 [2024-06-10 10:54:34.414694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.229 qpair failed and we were unable to recover it. 00:29:10.229 [2024-06-10 10:54:34.415026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.229 [2024-06-10 10:54:34.415035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.229 qpair failed and we were unable to recover it. 00:29:10.229 [2024-06-10 10:54:34.415366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.229 [2024-06-10 10:54:34.415376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.229 qpair failed and we were unable to recover it. 
00:29:10.229 [2024-06-10 10:54:34.415728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.229 [2024-06-10 10:54:34.415737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.229 qpair failed and we were unable to recover it. 00:29:10.229 [2024-06-10 10:54:34.415983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.229 [2024-06-10 10:54:34.415992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.229 qpair failed and we were unable to recover it. 00:29:10.229 [2024-06-10 10:54:34.416390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.229 [2024-06-10 10:54:34.416399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.229 qpair failed and we were unable to recover it. 00:29:10.229 [2024-06-10 10:54:34.416632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.229 [2024-06-10 10:54:34.416642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.229 qpair failed and we were unable to recover it. 00:29:10.229 [2024-06-10 10:54:34.417042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.229 [2024-06-10 10:54:34.417051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.229 qpair failed and we were unable to recover it. 00:29:10.229 [2024-06-10 10:54:34.417382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.229 [2024-06-10 10:54:34.417391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.229 qpair failed and we were unable to recover it. 00:29:10.229 [2024-06-10 10:54:34.417653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.229 [2024-06-10 10:54:34.417662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.229 qpair failed and we were unable to recover it. 00:29:10.229 [2024-06-10 10:54:34.418011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.229 [2024-06-10 10:54:34.418020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.229 qpair failed and we were unable to recover it. 00:29:10.229 [2024-06-10 10:54:34.418371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.229 [2024-06-10 10:54:34.418381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.229 qpair failed and we were unable to recover it. 00:29:10.229 [2024-06-10 10:54:34.418731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.229 [2024-06-10 10:54:34.418740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.229 qpair failed and we were unable to recover it. 
00:29:10.229 [2024-06-10 10:54:34.419072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.229 [2024-06-10 10:54:34.419081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.229 qpair failed and we were unable to recover it. 00:29:10.229 [2024-06-10 10:54:34.419481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.230 [2024-06-10 10:54:34.419491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.230 qpair failed and we were unable to recover it. 00:29:10.230 [2024-06-10 10:54:34.419798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.230 [2024-06-10 10:54:34.419807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.230 qpair failed and we were unable to recover it. 00:29:10.230 [2024-06-10 10:54:34.420172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.230 [2024-06-10 10:54:34.420182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.230 qpair failed and we were unable to recover it. 00:29:10.230 [2024-06-10 10:54:34.420543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.230 [2024-06-10 10:54:34.420552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.230 qpair failed and we were unable to recover it. 00:29:10.230 [2024-06-10 10:54:34.420930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.230 [2024-06-10 10:54:34.420941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.230 qpair failed and we were unable to recover it. 00:29:10.230 [2024-06-10 10:54:34.421299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.230 [2024-06-10 10:54:34.421309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.230 qpair failed and we were unable to recover it. 00:29:10.230 [2024-06-10 10:54:34.421684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.230 [2024-06-10 10:54:34.421693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.230 qpair failed and we were unable to recover it. 00:29:10.230 [2024-06-10 10:54:34.422051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.230 [2024-06-10 10:54:34.422060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.230 qpair failed and we were unable to recover it. 00:29:10.230 [2024-06-10 10:54:34.422313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.230 [2024-06-10 10:54:34.422322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.230 qpair failed and we were unable to recover it. 
00:29:10.230 [2024-06-10 10:54:34.422711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.230 [2024-06-10 10:54:34.422720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.230 qpair failed and we were unable to recover it. 00:29:10.230 [2024-06-10 10:54:34.423097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.230 [2024-06-10 10:54:34.423107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.230 qpair failed and we were unable to recover it. 00:29:10.230 [2024-06-10 10:54:34.423306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.230 [2024-06-10 10:54:34.423316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.230 qpair failed and we were unable to recover it. 00:29:10.230 [2024-06-10 10:54:34.423696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.230 [2024-06-10 10:54:34.423705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.230 qpair failed and we were unable to recover it. 00:29:10.230 [2024-06-10 10:54:34.424073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.230 [2024-06-10 10:54:34.424083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.230 qpair failed and we were unable to recover it. 00:29:10.230 [2024-06-10 10:54:34.424442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.230 [2024-06-10 10:54:34.424452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.230 qpair failed and we were unable to recover it. 00:29:10.230 [2024-06-10 10:54:34.424877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.230 [2024-06-10 10:54:34.424886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.230 qpair failed and we were unable to recover it. 00:29:10.230 [2024-06-10 10:54:34.425262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.230 [2024-06-10 10:54:34.425271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.230 qpair failed and we were unable to recover it. 00:29:10.230 [2024-06-10 10:54:34.425633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.230 [2024-06-10 10:54:34.425642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.230 qpair failed and we were unable to recover it. 00:29:10.230 [2024-06-10 10:54:34.425955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.230 [2024-06-10 10:54:34.425964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.230 qpair failed and we were unable to recover it. 
00:29:10.230 [2024-06-10 10:54:34.426300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.230 [2024-06-10 10:54:34.426310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.230 qpair failed and we were unable to recover it. 00:29:10.230 [2024-06-10 10:54:34.426676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.230 [2024-06-10 10:54:34.426685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.230 qpair failed and we were unable to recover it. 00:29:10.230 [2024-06-10 10:54:34.427021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.230 [2024-06-10 10:54:34.427030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.230 qpair failed and we were unable to recover it. 00:29:10.230 [2024-06-10 10:54:34.427371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.230 [2024-06-10 10:54:34.427380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.230 qpair failed and we were unable to recover it. 00:29:10.230 [2024-06-10 10:54:34.427748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.230 [2024-06-10 10:54:34.427758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.230 qpair failed and we were unable to recover it. 00:29:10.230 [2024-06-10 10:54:34.428122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.230 [2024-06-10 10:54:34.428131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.230 qpair failed and we were unable to recover it. 00:29:10.230 [2024-06-10 10:54:34.428544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.230 [2024-06-10 10:54:34.428553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.230 qpair failed and we were unable to recover it. 00:29:10.230 [2024-06-10 10:54:34.428866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.230 [2024-06-10 10:54:34.428875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.230 qpair failed and we were unable to recover it. 00:29:10.230 [2024-06-10 10:54:34.429246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.230 [2024-06-10 10:54:34.429255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.230 qpair failed and we were unable to recover it. 00:29:10.230 [2024-06-10 10:54:34.429596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.230 [2024-06-10 10:54:34.429605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.230 qpair failed and we were unable to recover it. 
00:29:10.230 [2024-06-10 10:54:34.429953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.230 [2024-06-10 10:54:34.429969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.230 qpair failed and we were unable to recover it. 00:29:10.230 [2024-06-10 10:54:34.430324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.230 [2024-06-10 10:54:34.430333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.230 qpair failed and we were unable to recover it. 00:29:10.230 [2024-06-10 10:54:34.430660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.230 [2024-06-10 10:54:34.430671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.230 qpair failed and we were unable to recover it. 00:29:10.230 [2024-06-10 10:54:34.431038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.230 [2024-06-10 10:54:34.431046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.230 qpair failed and we were unable to recover it. 00:29:10.230 [2024-06-10 10:54:34.431431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.230 [2024-06-10 10:54:34.431441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.230 qpair failed and we were unable to recover it. 00:29:10.230 [2024-06-10 10:54:34.431787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.230 [2024-06-10 10:54:34.431796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.230 qpair failed and we were unable to recover it. 00:29:10.230 [2024-06-10 10:54:34.432156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.230 [2024-06-10 10:54:34.432165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.230 qpair failed and we were unable to recover it. 00:29:10.230 [2024-06-10 10:54:34.432512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.230 [2024-06-10 10:54:34.432522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.230 qpair failed and we were unable to recover it. 00:29:10.231 [2024-06-10 10:54:34.432873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.231 [2024-06-10 10:54:34.432883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.231 qpair failed and we were unable to recover it. 00:29:10.231 [2024-06-10 10:54:34.433262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.231 [2024-06-10 10:54:34.433271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.231 qpair failed and we were unable to recover it. 
00:29:10.231 [2024-06-10 10:54:34.433714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.231 [2024-06-10 10:54:34.433724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.231 qpair failed and we were unable to recover it. 00:29:10.231 [2024-06-10 10:54:34.434134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.231 [2024-06-10 10:54:34.434143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.231 qpair failed and we were unable to recover it. 00:29:10.231 [2024-06-10 10:54:34.434477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.231 [2024-06-10 10:54:34.434487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.231 qpair failed and we were unable to recover it. 00:29:10.231 [2024-06-10 10:54:34.434846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.231 [2024-06-10 10:54:34.434855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.231 qpair failed and we were unable to recover it. 00:29:10.231 [2024-06-10 10:54:34.435190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.231 [2024-06-10 10:54:34.435199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.231 qpair failed and we were unable to recover it. 00:29:10.231 [2024-06-10 10:54:34.435565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.231 [2024-06-10 10:54:34.435575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.231 qpair failed and we were unable to recover it. 00:29:10.231 [2024-06-10 10:54:34.435935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.231 [2024-06-10 10:54:34.435944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.231 qpair failed and we were unable to recover it. 00:29:10.231 [2024-06-10 10:54:34.436308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.231 [2024-06-10 10:54:34.436317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.231 qpair failed and we were unable to recover it. 00:29:10.231 [2024-06-10 10:54:34.436611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.231 [2024-06-10 10:54:34.436620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.231 qpair failed and we were unable to recover it. 00:29:10.231 [2024-06-10 10:54:34.436978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.231 [2024-06-10 10:54:34.436987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.231 qpair failed and we were unable to recover it. 
00:29:10.231 [2024-06-10 10:54:34.437330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.231 [2024-06-10 10:54:34.437339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.231 qpair failed and we were unable to recover it. 00:29:10.231 [2024-06-10 10:54:34.437708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.231 [2024-06-10 10:54:34.437718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.231 qpair failed and we were unable to recover it. 00:29:10.231 [2024-06-10 10:54:34.437933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.231 [2024-06-10 10:54:34.437943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.231 qpair failed and we were unable to recover it. 00:29:10.231 [2024-06-10 10:54:34.438313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.231 [2024-06-10 10:54:34.438323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.231 qpair failed and we were unable to recover it. 00:29:10.231 [2024-06-10 10:54:34.438696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.231 [2024-06-10 10:54:34.438711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.231 qpair failed and we were unable to recover it. 00:29:10.231 [2024-06-10 10:54:34.438975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.231 [2024-06-10 10:54:34.438984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.231 qpair failed and we were unable to recover it. 00:29:10.231 [2024-06-10 10:54:34.439207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.231 [2024-06-10 10:54:34.439216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.231 qpair failed and we were unable to recover it. 00:29:10.231 [2024-06-10 10:54:34.439604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.231 [2024-06-10 10:54:34.439613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.231 qpair failed and we were unable to recover it. 00:29:10.231 [2024-06-10 10:54:34.439963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.231 [2024-06-10 10:54:34.439973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.231 qpair failed and we were unable to recover it. 00:29:10.231 [2024-06-10 10:54:34.440230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.231 [2024-06-10 10:54:34.440240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.231 qpair failed and we were unable to recover it. 
00:29:10.231 [2024-06-10 10:54:34.440614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.231 [2024-06-10 10:54:34.440624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.231 qpair failed and we were unable to recover it. 00:29:10.231 [2024-06-10 10:54:34.440981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.231 [2024-06-10 10:54:34.440990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.231 qpair failed and we were unable to recover it. 00:29:10.231 [2024-06-10 10:54:34.441328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.231 [2024-06-10 10:54:34.441337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.231 qpair failed and we were unable to recover it. 00:29:10.231 [2024-06-10 10:54:34.441674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.231 [2024-06-10 10:54:34.441683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.231 qpair failed and we were unable to recover it. 00:29:10.231 [2024-06-10 10:54:34.442032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.231 [2024-06-10 10:54:34.442042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.231 qpair failed and we were unable to recover it. 00:29:10.231 [2024-06-10 10:54:34.442419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.231 [2024-06-10 10:54:34.442428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.231 qpair failed and we were unable to recover it. 00:29:10.231 [2024-06-10 10:54:34.442764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.231 [2024-06-10 10:54:34.442773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.231 qpair failed and we were unable to recover it. 00:29:10.231 [2024-06-10 10:54:34.443129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.231 [2024-06-10 10:54:34.443137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.231 qpair failed and we were unable to recover it. 00:29:10.231 [2024-06-10 10:54:34.443512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.231 [2024-06-10 10:54:34.443521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.231 qpair failed and we were unable to recover it. 00:29:10.231 [2024-06-10 10:54:34.443900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.231 [2024-06-10 10:54:34.443910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.231 qpair failed and we were unable to recover it. 
00:29:10.231 [2024-06-10 10:54:34.444130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.231 [2024-06-10 10:54:34.444141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.231 qpair failed and we were unable to recover it. 00:29:10.231 [2024-06-10 10:54:34.444502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.231 [2024-06-10 10:54:34.444511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.231 qpair failed and we were unable to recover it. 00:29:10.231 [2024-06-10 10:54:34.444757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.231 [2024-06-10 10:54:34.444766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.232 qpair failed and we were unable to recover it. 00:29:10.232 [2024-06-10 10:54:34.445113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.232 [2024-06-10 10:54:34.445123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.232 qpair failed and we were unable to recover it. 00:29:10.232 [2024-06-10 10:54:34.445494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.232 [2024-06-10 10:54:34.445503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.232 qpair failed and we were unable to recover it. 00:29:10.232 [2024-06-10 10:54:34.445835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.232 [2024-06-10 10:54:34.445844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.232 qpair failed and we were unable to recover it. 00:29:10.232 [2024-06-10 10:54:34.446166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.232 [2024-06-10 10:54:34.446175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.232 qpair failed and we were unable to recover it. 00:29:10.232 [2024-06-10 10:54:34.446534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.232 [2024-06-10 10:54:34.446543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.232 qpair failed and we were unable to recover it. 00:29:10.232 [2024-06-10 10:54:34.446874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.232 [2024-06-10 10:54:34.446883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.232 qpair failed and we were unable to recover it. 00:29:10.232 [2024-06-10 10:54:34.447208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.232 [2024-06-10 10:54:34.447217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.232 qpair failed and we were unable to recover it. 
00:29:10.232 [2024-06-10 10:54:34.447530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.232 [2024-06-10 10:54:34.447539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.232 qpair failed and we were unable to recover it. 00:29:10.232 [2024-06-10 10:54:34.447897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.232 [2024-06-10 10:54:34.447907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.232 qpair failed and we were unable to recover it. 00:29:10.232 [2024-06-10 10:54:34.448262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.232 [2024-06-10 10:54:34.448271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.232 qpair failed and we were unable to recover it. 00:29:10.232 [2024-06-10 10:54:34.448613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.232 [2024-06-10 10:54:34.448622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.232 qpair failed and we were unable to recover it. 00:29:10.232 [2024-06-10 10:54:34.448981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.232 [2024-06-10 10:54:34.448990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.232 qpair failed and we were unable to recover it. 00:29:10.232 [2024-06-10 10:54:34.449364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.232 [2024-06-10 10:54:34.449373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.232 qpair failed and we were unable to recover it. 00:29:10.232 [2024-06-10 10:54:34.449713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.232 [2024-06-10 10:54:34.449722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.232 qpair failed and we were unable to recover it. 00:29:10.232 [2024-06-10 10:54:34.450080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.232 [2024-06-10 10:54:34.450089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.232 qpair failed and we were unable to recover it. 00:29:10.232 [2024-06-10 10:54:34.450428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.232 [2024-06-10 10:54:34.450437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.232 qpair failed and we were unable to recover it. 00:29:10.232 [2024-06-10 10:54:34.450797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.232 [2024-06-10 10:54:34.450806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.232 qpair failed and we were unable to recover it. 
00:29:10.232 [2024-06-10 10:54:34.451194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.232 [2024-06-10 10:54:34.451203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.232 qpair failed and we were unable to recover it. 00:29:10.232 [2024-06-10 10:54:34.451542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.232 [2024-06-10 10:54:34.451552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.232 qpair failed and we were unable to recover it. 00:29:10.232 [2024-06-10 10:54:34.451894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.232 [2024-06-10 10:54:34.451903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.232 qpair failed and we were unable to recover it. 00:29:10.232 [2024-06-10 10:54:34.452295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.232 [2024-06-10 10:54:34.452305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.232 qpair failed and we were unable to recover it. 00:29:10.232 [2024-06-10 10:54:34.452629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.232 [2024-06-10 10:54:34.452638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.232 qpair failed and we were unable to recover it. 00:29:10.232 [2024-06-10 10:54:34.453016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.232 [2024-06-10 10:54:34.453025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.232 qpair failed and we were unable to recover it. 00:29:10.232 [2024-06-10 10:54:34.453328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.232 [2024-06-10 10:54:34.453337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.232 qpair failed and we were unable to recover it. 00:29:10.232 [2024-06-10 10:54:34.453682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.232 [2024-06-10 10:54:34.453691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.232 qpair failed and we were unable to recover it. 00:29:10.232 [2024-06-10 10:54:34.454104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.232 [2024-06-10 10:54:34.454113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.232 qpair failed and we were unable to recover it. 00:29:10.232 [2024-06-10 10:54:34.454444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.232 [2024-06-10 10:54:34.454453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.232 qpair failed and we were unable to recover it. 
00:29:10.232 [2024-06-10 10:54:34.454818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.232 [2024-06-10 10:54:34.454829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.232 qpair failed and we were unable to recover it. 00:29:10.232 [2024-06-10 10:54:34.455201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.232 [2024-06-10 10:54:34.455210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.232 qpair failed and we were unable to recover it. 00:29:10.232 [2024-06-10 10:54:34.455547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.232 [2024-06-10 10:54:34.455556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.232 qpair failed and we were unable to recover it. 00:29:10.232 [2024-06-10 10:54:34.455908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.232 [2024-06-10 10:54:34.455924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.232 qpair failed and we were unable to recover it. 00:29:10.232 [2024-06-10 10:54:34.456303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.232 [2024-06-10 10:54:34.456315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.232 qpair failed and we were unable to recover it. 00:29:10.232 [2024-06-10 10:54:34.456694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.232 [2024-06-10 10:54:34.456703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.232 qpair failed and we were unable to recover it. 00:29:10.232 [2024-06-10 10:54:34.456971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.232 [2024-06-10 10:54:34.456981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.232 qpair failed and we were unable to recover it. 00:29:10.232 [2024-06-10 10:54:34.457325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.233 [2024-06-10 10:54:34.457334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.233 qpair failed and we were unable to recover it. 00:29:10.233 [2024-06-10 10:54:34.457699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.233 [2024-06-10 10:54:34.457716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.233 qpair failed and we were unable to recover it. 00:29:10.233 [2024-06-10 10:54:34.458076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.233 [2024-06-10 10:54:34.458085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.233 qpair failed and we were unable to recover it. 
00:29:10.233 [2024-06-10 10:54:34.458463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.233 [2024-06-10 10:54:34.458472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.233 qpair failed and we were unable to recover it. 00:29:10.233 [2024-06-10 10:54:34.458815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.233 [2024-06-10 10:54:34.458824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.233 qpair failed and we were unable to recover it. 00:29:10.233 [2024-06-10 10:54:34.459074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.233 [2024-06-10 10:54:34.459083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.233 qpair failed and we were unable to recover it. 00:29:10.233 [2024-06-10 10:54:34.459465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.233 [2024-06-10 10:54:34.459475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.233 qpair failed and we were unable to recover it. 00:29:10.233 [2024-06-10 10:54:34.459858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.233 [2024-06-10 10:54:34.459868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.233 qpair failed and we were unable to recover it. 00:29:10.233 [2024-06-10 10:54:34.460226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.233 [2024-06-10 10:54:34.460236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.233 qpair failed and we were unable to recover it. 00:29:10.233 [2024-06-10 10:54:34.460617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.233 [2024-06-10 10:54:34.460627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.233 qpair failed and we were unable to recover it. 00:29:10.233 [2024-06-10 10:54:34.460955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.233 [2024-06-10 10:54:34.460965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.233 qpair failed and we were unable to recover it. 00:29:10.233 [2024-06-10 10:54:34.461322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.233 [2024-06-10 10:54:34.461332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.233 qpair failed and we were unable to recover it. 00:29:10.233 [2024-06-10 10:54:34.461708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.233 [2024-06-10 10:54:34.461723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.233 qpair failed and we were unable to recover it. 
00:29:10.233 [2024-06-10 10:54:34.462077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.233 [2024-06-10 10:54:34.462086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.233 qpair failed and we were unable to recover it. 00:29:10.233 [2024-06-10 10:54:34.462495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.233 [2024-06-10 10:54:34.462505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.233 qpair failed and we were unable to recover it. 00:29:10.233 [2024-06-10 10:54:34.462901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.233 [2024-06-10 10:54:34.462910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.233 qpair failed and we were unable to recover it. 00:29:10.233 [2024-06-10 10:54:34.463276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.233 [2024-06-10 10:54:34.463291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.233 qpair failed and we were unable to recover it. 00:29:10.233 [2024-06-10 10:54:34.463614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.233 [2024-06-10 10:54:34.463623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.233 qpair failed and we were unable to recover it. 00:29:10.233 [2024-06-10 10:54:34.463956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.233 [2024-06-10 10:54:34.463965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.233 qpair failed and we were unable to recover it. 00:29:10.233 [2024-06-10 10:54:34.464317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.233 [2024-06-10 10:54:34.464326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.233 qpair failed and we were unable to recover it. 00:29:10.233 [2024-06-10 10:54:34.464705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.233 [2024-06-10 10:54:34.464716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.233 qpair failed and we were unable to recover it. 00:29:10.233 [2024-06-10 10:54:34.465092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.233 [2024-06-10 10:54:34.465101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.233 qpair failed and we were unable to recover it. 00:29:10.233 [2024-06-10 10:54:34.465422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.233 [2024-06-10 10:54:34.465432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.233 qpair failed and we were unable to recover it. 
00:29:10.233 [2024-06-10 10:54:34.465684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.233 [2024-06-10 10:54:34.465692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.233 qpair failed and we were unable to recover it. 00:29:10.233 [2024-06-10 10:54:34.466108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.233 [2024-06-10 10:54:34.466117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.233 qpair failed and we were unable to recover it. 00:29:10.233 [2024-06-10 10:54:34.466454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.233 [2024-06-10 10:54:34.466464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.233 qpair failed and we were unable to recover it. 00:29:10.233 [2024-06-10 10:54:34.466800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.233 [2024-06-10 10:54:34.466809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.233 qpair failed and we were unable to recover it. 00:29:10.233 [2024-06-10 10:54:34.467142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.233 [2024-06-10 10:54:34.467151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.233 qpair failed and we were unable to recover it. 00:29:10.233 [2024-06-10 10:54:34.467480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.233 [2024-06-10 10:54:34.467489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.233 qpair failed and we were unable to recover it. 00:29:10.233 [2024-06-10 10:54:34.467842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.233 [2024-06-10 10:54:34.467851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.233 qpair failed and we were unable to recover it. 00:29:10.233 [2024-06-10 10:54:34.468097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.233 [2024-06-10 10:54:34.468106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.233 qpair failed and we were unable to recover it. 00:29:10.233 [2024-06-10 10:54:34.468409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.233 [2024-06-10 10:54:34.468418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.233 qpair failed and we were unable to recover it. 00:29:10.233 [2024-06-10 10:54:34.468753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.233 [2024-06-10 10:54:34.468763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.233 qpair failed and we were unable to recover it. 
00:29:10.233 [2024-06-10 10:54:34.469150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.233 [2024-06-10 10:54:34.469160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.233 qpair failed and we were unable to recover it. 00:29:10.233 [2024-06-10 10:54:34.469520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.233 [2024-06-10 10:54:34.469530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.233 qpair failed and we were unable to recover it. 00:29:10.233 [2024-06-10 10:54:34.469781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.233 [2024-06-10 10:54:34.469791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.233 qpair failed and we were unable to recover it. 00:29:10.233 [2024-06-10 10:54:34.470168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.233 [2024-06-10 10:54:34.470178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.233 qpair failed and we were unable to recover it. 00:29:10.234 [2024-06-10 10:54:34.470533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.234 [2024-06-10 10:54:34.470543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.234 qpair failed and we were unable to recover it. 00:29:10.234 [2024-06-10 10:54:34.470907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.234 [2024-06-10 10:54:34.470917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.234 qpair failed and we were unable to recover it. 00:29:10.234 [2024-06-10 10:54:34.471303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.234 [2024-06-10 10:54:34.471312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.234 qpair failed and we were unable to recover it. 00:29:10.234 [2024-06-10 10:54:34.471691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.234 [2024-06-10 10:54:34.471701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.234 qpair failed and we were unable to recover it. 00:29:10.234 [2024-06-10 10:54:34.472133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.234 [2024-06-10 10:54:34.472142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.234 qpair failed and we were unable to recover it. 00:29:10.234 [2024-06-10 10:54:34.472404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.234 [2024-06-10 10:54:34.472413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.234 qpair failed and we were unable to recover it. 
00:29:10.234 [2024-06-10 10:54:34.472797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.234 [2024-06-10 10:54:34.472806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.234 qpair failed and we were unable to recover it. 00:29:10.234 [2024-06-10 10:54:34.473166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.234 [2024-06-10 10:54:34.473175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.234 qpair failed and we were unable to recover it. 00:29:10.234 [2024-06-10 10:54:34.473519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.234 [2024-06-10 10:54:34.473529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.234 qpair failed and we were unable to recover it. 00:29:10.234 [2024-06-10 10:54:34.473837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.234 [2024-06-10 10:54:34.473846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.234 qpair failed and we were unable to recover it. 00:29:10.234 [2024-06-10 10:54:34.474206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.234 [2024-06-10 10:54:34.474216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.234 qpair failed and we were unable to recover it. 00:29:10.234 [2024-06-10 10:54:34.474594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.234 [2024-06-10 10:54:34.474604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.234 qpair failed and we were unable to recover it. 00:29:10.234 [2024-06-10 10:54:34.474877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.234 [2024-06-10 10:54:34.474886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.234 qpair failed and we were unable to recover it. 00:29:10.234 [2024-06-10 10:54:34.475296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.234 [2024-06-10 10:54:34.475305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.234 qpair failed and we were unable to recover it. 00:29:10.234 [2024-06-10 10:54:34.475645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.234 [2024-06-10 10:54:34.475654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.234 qpair failed and we were unable to recover it. 00:29:10.234 [2024-06-10 10:54:34.476004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.234 [2024-06-10 10:54:34.476021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.234 qpair failed and we were unable to recover it. 
00:29:10.234 [2024-06-10 10:54:34.476377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.234 [2024-06-10 10:54:34.476387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.234 qpair failed and we were unable to recover it. 00:29:10.234 [2024-06-10 10:54:34.476760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.234 [2024-06-10 10:54:34.476769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.234 qpair failed and we were unable to recover it. 00:29:10.234 [2024-06-10 10:54:34.477037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.234 [2024-06-10 10:54:34.477046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.234 qpair failed and we were unable to recover it. 00:29:10.234 [2024-06-10 10:54:34.477318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.234 [2024-06-10 10:54:34.477336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.234 qpair failed and we were unable to recover it. 00:29:10.234 [2024-06-10 10:54:34.477663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.234 [2024-06-10 10:54:34.477672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.234 qpair failed and we were unable to recover it. 00:29:10.234 [2024-06-10 10:54:34.478004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.234 [2024-06-10 10:54:34.478014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.234 qpair failed and we were unable to recover it. 00:29:10.234 [2024-06-10 10:54:34.478371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.234 [2024-06-10 10:54:34.478380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.234 qpair failed and we were unable to recover it. 00:29:10.234 [2024-06-10 10:54:34.478726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.234 [2024-06-10 10:54:34.478735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.234 qpair failed and we were unable to recover it. 00:29:10.234 [2024-06-10 10:54:34.479085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.234 [2024-06-10 10:54:34.479095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.234 qpair failed and we were unable to recover it. 00:29:10.234 [2024-06-10 10:54:34.479452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.234 [2024-06-10 10:54:34.479461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.234 qpair failed and we were unable to recover it. 
00:29:10.234 [2024-06-10 10:54:34.479801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.234 [2024-06-10 10:54:34.479811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.234 qpair failed and we were unable to recover it. 00:29:10.234 [2024-06-10 10:54:34.480177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.234 [2024-06-10 10:54:34.480186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.235 qpair failed and we were unable to recover it. 00:29:10.235 [2024-06-10 10:54:34.480596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.235 [2024-06-10 10:54:34.480607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.235 qpair failed and we were unable to recover it. 00:29:10.235 [2024-06-10 10:54:34.480943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.235 [2024-06-10 10:54:34.480952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.235 qpair failed and we were unable to recover it. 00:29:10.235 [2024-06-10 10:54:34.481320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.235 [2024-06-10 10:54:34.481329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.235 qpair failed and we were unable to recover it. 00:29:10.235 [2024-06-10 10:54:34.481593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.235 [2024-06-10 10:54:34.481602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.235 qpair failed and we were unable to recover it. 00:29:10.235 [2024-06-10 10:54:34.481964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.235 [2024-06-10 10:54:34.481972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.235 qpair failed and we were unable to recover it. 00:29:10.235 [2024-06-10 10:54:34.482338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.235 [2024-06-10 10:54:34.482348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.235 qpair failed and we were unable to recover it. 00:29:10.235 [2024-06-10 10:54:34.482548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.235 [2024-06-10 10:54:34.482559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.235 qpair failed and we were unable to recover it. 00:29:10.235 [2024-06-10 10:54:34.482898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.235 [2024-06-10 10:54:34.482908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.235 qpair failed and we were unable to recover it. 
00:29:10.235 [2024-06-10 10:54:34.483260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.235 [2024-06-10 10:54:34.483269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.235 qpair failed and we were unable to recover it. 00:29:10.235 [2024-06-10 10:54:34.483582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.235 [2024-06-10 10:54:34.483592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.235 qpair failed and we were unable to recover it. 00:29:10.235 [2024-06-10 10:54:34.483970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.235 [2024-06-10 10:54:34.483979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.235 qpair failed and we were unable to recover it. 00:29:10.235 [2024-06-10 10:54:34.484319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.235 [2024-06-10 10:54:34.484330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.235 qpair failed and we were unable to recover it. 00:29:10.235 [2024-06-10 10:54:34.484705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.235 [2024-06-10 10:54:34.484714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.235 qpair failed and we were unable to recover it. 00:29:10.235 [2024-06-10 10:54:34.485046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.235 [2024-06-10 10:54:34.485056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.235 qpair failed and we were unable to recover it. 00:29:10.235 [2024-06-10 10:54:34.485439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.235 [2024-06-10 10:54:34.485448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.235 qpair failed and we were unable to recover it. 00:29:10.235 [2024-06-10 10:54:34.485793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.235 [2024-06-10 10:54:34.485803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.235 qpair failed and we were unable to recover it. 00:29:10.235 [2024-06-10 10:54:34.486159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.235 [2024-06-10 10:54:34.486173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.235 qpair failed and we were unable to recover it. 00:29:10.235 [2024-06-10 10:54:34.486431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.235 [2024-06-10 10:54:34.486441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.235 qpair failed and we were unable to recover it. 
00:29:10.235 [2024-06-10 10:54:34.486775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.235 [2024-06-10 10:54:34.486785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.235 qpair failed and we were unable to recover it. 00:29:10.235 [2024-06-10 10:54:34.487004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.235 [2024-06-10 10:54:34.487014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.235 qpair failed and we were unable to recover it. 00:29:10.235 [2024-06-10 10:54:34.487394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.235 [2024-06-10 10:54:34.487404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.235 qpair failed and we were unable to recover it. 00:29:10.235 [2024-06-10 10:54:34.487768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.235 [2024-06-10 10:54:34.487778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.235 qpair failed and we were unable to recover it. 00:29:10.235 [2024-06-10 10:54:34.488157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.235 [2024-06-10 10:54:34.488165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.235 qpair failed and we were unable to recover it. 00:29:10.235 [2024-06-10 10:54:34.488509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.235 [2024-06-10 10:54:34.488520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.235 qpair failed and we were unable to recover it. 00:29:10.235 [2024-06-10 10:54:34.488881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.235 [2024-06-10 10:54:34.488890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.235 qpair failed and we were unable to recover it. 00:29:10.235 [2024-06-10 10:54:34.489254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.235 [2024-06-10 10:54:34.489264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.235 qpair failed and we were unable to recover it. 00:29:10.235 [2024-06-10 10:54:34.489668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.235 [2024-06-10 10:54:34.489677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.235 qpair failed and we were unable to recover it. 00:29:10.235 [2024-06-10 10:54:34.490152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.235 [2024-06-10 10:54:34.490161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.235 qpair failed and we were unable to recover it. 
00:29:10.235 [2024-06-10 10:54:34.490497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.235 [2024-06-10 10:54:34.490506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.235 qpair failed and we were unable to recover it. 00:29:10.235 [2024-06-10 10:54:34.490910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.235 [2024-06-10 10:54:34.490919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.235 qpair failed and we were unable to recover it. 00:29:10.235 [2024-06-10 10:54:34.491257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.235 [2024-06-10 10:54:34.491266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.235 qpair failed and we were unable to recover it. 00:29:10.235 [2024-06-10 10:54:34.491573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.235 [2024-06-10 10:54:34.491583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.235 qpair failed and we were unable to recover it. 00:29:10.235 [2024-06-10 10:54:34.491951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.235 [2024-06-10 10:54:34.491960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.235 qpair failed and we were unable to recover it. 00:29:10.235 [2024-06-10 10:54:34.492351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.235 [2024-06-10 10:54:34.492360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.235 qpair failed and we were unable to recover it. 00:29:10.235 [2024-06-10 10:54:34.492742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.235 [2024-06-10 10:54:34.492752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.235 qpair failed and we were unable to recover it. 00:29:10.235 [2024-06-10 10:54:34.493110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.235 [2024-06-10 10:54:34.493120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.235 qpair failed and we were unable to recover it. 00:29:10.235 [2024-06-10 10:54:34.493493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.235 [2024-06-10 10:54:34.493502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.236 qpair failed and we were unable to recover it. 00:29:10.236 [2024-06-10 10:54:34.493832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.236 [2024-06-10 10:54:34.493842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.236 qpair failed and we were unable to recover it. 
00:29:10.236 [2024-06-10 10:54:34.494089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.236 [2024-06-10 10:54:34.494098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.236 qpair failed and we were unable to recover it. 00:29:10.236 [2024-06-10 10:54:34.494416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.236 [2024-06-10 10:54:34.494425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.236 qpair failed and we were unable to recover it. 00:29:10.236 [2024-06-10 10:54:34.494756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.236 [2024-06-10 10:54:34.494765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.236 qpair failed and we were unable to recover it. 00:29:10.236 [2024-06-10 10:54:34.495119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.236 [2024-06-10 10:54:34.495128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.236 qpair failed and we were unable to recover it. 00:29:10.236 [2024-06-10 10:54:34.495461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.236 [2024-06-10 10:54:34.495470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.236 qpair failed and we were unable to recover it. 00:29:10.236 [2024-06-10 10:54:34.495761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.236 [2024-06-10 10:54:34.495770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.236 qpair failed and we were unable to recover it. 00:29:10.236 [2024-06-10 10:54:34.496144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.236 [2024-06-10 10:54:34.496153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.236 qpair failed and we were unable to recover it. 00:29:10.236 [2024-06-10 10:54:34.496499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.236 [2024-06-10 10:54:34.496508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.236 qpair failed and we were unable to recover it. 00:29:10.236 [2024-06-10 10:54:34.496855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.236 [2024-06-10 10:54:34.496864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.236 qpair failed and we were unable to recover it. 00:29:10.236 [2024-06-10 10:54:34.497078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.236 [2024-06-10 10:54:34.497087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.236 qpair failed and we were unable to recover it. 
00:29:10.236 [2024-06-10 10:54:34.497451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.236 [2024-06-10 10:54:34.497460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.236 qpair failed and we were unable to recover it. 00:29:10.236 [2024-06-10 10:54:34.497810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.236 [2024-06-10 10:54:34.497819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.236 qpair failed and we were unable to recover it. 00:29:10.236 [2024-06-10 10:54:34.498148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.236 [2024-06-10 10:54:34.498159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.236 qpair failed and we were unable to recover it. 00:29:10.236 [2024-06-10 10:54:34.498534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.236 [2024-06-10 10:54:34.498543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.236 qpair failed and we were unable to recover it. 00:29:10.508 [2024-06-10 10:54:34.498876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.508 [2024-06-10 10:54:34.498887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.508 qpair failed and we were unable to recover it. 00:29:10.508 [2024-06-10 10:54:34.499236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.508 [2024-06-10 10:54:34.499248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.508 qpair failed and we were unable to recover it. 00:29:10.508 [2024-06-10 10:54:34.499606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.508 [2024-06-10 10:54:34.499615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.508 qpair failed and we were unable to recover it. 00:29:10.508 [2024-06-10 10:54:34.499827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.508 [2024-06-10 10:54:34.499837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.508 qpair failed and we were unable to recover it. 00:29:10.508 [2024-06-10 10:54:34.500086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.508 [2024-06-10 10:54:34.500096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.508 qpair failed and we were unable to recover it. 00:29:10.508 [2024-06-10 10:54:34.500457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.508 [2024-06-10 10:54:34.500466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.508 qpair failed and we were unable to recover it. 
00:29:10.508 [2024-06-10 10:54:34.500719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.508 [2024-06-10 10:54:34.500729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.508 qpair failed and we were unable to recover it. 00:29:10.508 [2024-06-10 10:54:34.501101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.508 [2024-06-10 10:54:34.501110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.508 qpair failed and we were unable to recover it. 00:29:10.508 [2024-06-10 10:54:34.501442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.508 [2024-06-10 10:54:34.501453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.508 qpair failed and we were unable to recover it. 00:29:10.508 [2024-06-10 10:54:34.501808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.508 [2024-06-10 10:54:34.501817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.508 qpair failed and we were unable to recover it. 00:29:10.508 [2024-06-10 10:54:34.502154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.508 [2024-06-10 10:54:34.502163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.508 qpair failed and we were unable to recover it. 00:29:10.508 [2024-06-10 10:54:34.502382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.508 [2024-06-10 10:54:34.502391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.508 qpair failed and we were unable to recover it. 00:29:10.508 [2024-06-10 10:54:34.502772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.508 [2024-06-10 10:54:34.502781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.508 qpair failed and we were unable to recover it. 00:29:10.508 [2024-06-10 10:54:34.503114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.508 [2024-06-10 10:54:34.503123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.508 qpair failed and we were unable to recover it. 00:29:10.508 [2024-06-10 10:54:34.503597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.508 [2024-06-10 10:54:34.503606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.508 qpair failed and we were unable to recover it. 00:29:10.508 [2024-06-10 10:54:34.503996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.508 [2024-06-10 10:54:34.504005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.508 qpair failed and we were unable to recover it. 
00:29:10.508 [2024-06-10 10:54:34.504357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.508 [2024-06-10 10:54:34.504367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.508 qpair failed and we were unable to recover it. 00:29:10.508 [2024-06-10 10:54:34.504747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.508 [2024-06-10 10:54:34.504756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.508 qpair failed and we were unable to recover it. 00:29:10.508 [2024-06-10 10:54:34.505056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.508 [2024-06-10 10:54:34.505065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.508 qpair failed and we were unable to recover it. 00:29:10.508 [2024-06-10 10:54:34.505433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.508 [2024-06-10 10:54:34.505442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.508 qpair failed and we were unable to recover it. 00:29:10.508 [2024-06-10 10:54:34.505778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.508 [2024-06-10 10:54:34.505787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.509 qpair failed and we were unable to recover it. 00:29:10.509 [2024-06-10 10:54:34.506139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.509 [2024-06-10 10:54:34.506148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.509 qpair failed and we were unable to recover it. 00:29:10.509 [2024-06-10 10:54:34.506414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.509 [2024-06-10 10:54:34.506423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.509 qpair failed and we were unable to recover it. 00:29:10.509 [2024-06-10 10:54:34.506779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.509 [2024-06-10 10:54:34.506788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.509 qpair failed and we were unable to recover it. 00:29:10.509 [2024-06-10 10:54:34.507134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.509 [2024-06-10 10:54:34.507143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.509 qpair failed and we were unable to recover it. 00:29:10.509 [2024-06-10 10:54:34.507521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.509 [2024-06-10 10:54:34.507532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.509 qpair failed and we were unable to recover it. 
00:29:10.509 [2024-06-10 10:54:34.507875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.509 [2024-06-10 10:54:34.507885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.509 qpair failed and we were unable to recover it. 00:29:10.509 [2024-06-10 10:54:34.508244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.509 [2024-06-10 10:54:34.508254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.509 qpair failed and we were unable to recover it. 00:29:10.509 [2024-06-10 10:54:34.508635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.509 [2024-06-10 10:54:34.508644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.509 qpair failed and we were unable to recover it. 00:29:10.509 [2024-06-10 10:54:34.508984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.509 [2024-06-10 10:54:34.508993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.509 qpair failed and we were unable to recover it. 00:29:10.509 [2024-06-10 10:54:34.509388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.509 [2024-06-10 10:54:34.509398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.509 qpair failed and we were unable to recover it. 00:29:10.509 [2024-06-10 10:54:34.509808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.509 [2024-06-10 10:54:34.509817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.509 qpair failed and we were unable to recover it. 00:29:10.509 [2024-06-10 10:54:34.510150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.509 [2024-06-10 10:54:34.510159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.509 qpair failed and we were unable to recover it. 00:29:10.509 [2024-06-10 10:54:34.510498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.509 [2024-06-10 10:54:34.510508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.509 qpair failed and we were unable to recover it. 00:29:10.509 [2024-06-10 10:54:34.510713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.509 [2024-06-10 10:54:34.510723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.509 qpair failed and we were unable to recover it. 00:29:10.509 [2024-06-10 10:54:34.511065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.509 [2024-06-10 10:54:34.511074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.509 qpair failed and we were unable to recover it. 
00:29:10.509 [2024-06-10 10:54:34.511299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.509 [2024-06-10 10:54:34.511309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.509 qpair failed and we were unable to recover it. 00:29:10.509 [2024-06-10 10:54:34.511671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.509 [2024-06-10 10:54:34.511681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.509 qpair failed and we were unable to recover it. 00:29:10.509 [2024-06-10 10:54:34.512058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.509 [2024-06-10 10:54:34.512067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.509 qpair failed and we were unable to recover it. 00:29:10.509 [2024-06-10 10:54:34.512400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.509 [2024-06-10 10:54:34.512410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.509 qpair failed and we were unable to recover it. 00:29:10.509 [2024-06-10 10:54:34.512722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.509 [2024-06-10 10:54:34.512731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.509 qpair failed and we were unable to recover it. 00:29:10.509 [2024-06-10 10:54:34.513079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.509 [2024-06-10 10:54:34.513088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.509 qpair failed and we were unable to recover it. 00:29:10.509 [2024-06-10 10:54:34.513438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.509 [2024-06-10 10:54:34.513448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.509 qpair failed and we were unable to recover it. 00:29:10.509 [2024-06-10 10:54:34.513720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.509 [2024-06-10 10:54:34.513730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.509 qpair failed and we were unable to recover it. 00:29:10.509 [2024-06-10 10:54:34.514084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.509 [2024-06-10 10:54:34.514093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.509 qpair failed and we were unable to recover it. 00:29:10.509 [2024-06-10 10:54:34.514428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.509 [2024-06-10 10:54:34.514437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.509 qpair failed and we were unable to recover it. 
00:29:10.509 [2024-06-10 10:54:34.514805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.509 [2024-06-10 10:54:34.514814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.509 qpair failed and we were unable to recover it. 00:29:10.509 [2024-06-10 10:54:34.515169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.509 [2024-06-10 10:54:34.515178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.509 qpair failed and we were unable to recover it. 00:29:10.509 [2024-06-10 10:54:34.515540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.509 [2024-06-10 10:54:34.515549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.509 qpair failed and we were unable to recover it. 00:29:10.509 [2024-06-10 10:54:34.515922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.509 [2024-06-10 10:54:34.515931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.509 qpair failed and we were unable to recover it. 00:29:10.509 [2024-06-10 10:54:34.516301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.509 [2024-06-10 10:54:34.516310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.509 qpair failed and we were unable to recover it. 00:29:10.509 [2024-06-10 10:54:34.516673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.509 [2024-06-10 10:54:34.516682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.509 qpair failed and we were unable to recover it. 00:29:10.509 [2024-06-10 10:54:34.517040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.509 [2024-06-10 10:54:34.517049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.509 qpair failed and we were unable to recover it. 00:29:10.510 [2024-06-10 10:54:34.517375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.510 [2024-06-10 10:54:34.517386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.510 qpair failed and we were unable to recover it. 00:29:10.510 [2024-06-10 10:54:34.517616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.510 [2024-06-10 10:54:34.517625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.510 qpair failed and we were unable to recover it. 00:29:10.510 [2024-06-10 10:54:34.517977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.510 [2024-06-10 10:54:34.517986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.510 qpair failed and we were unable to recover it. 
00:29:10.515 [2024-06-10 10:54:34.588025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.515 [2024-06-10 10:54:34.588034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.515 qpair failed and we were unable to recover it. 00:29:10.515 [2024-06-10 10:54:34.588371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.515 [2024-06-10 10:54:34.588381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.515 qpair failed and we were unable to recover it. 00:29:10.515 [2024-06-10 10:54:34.588794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.515 [2024-06-10 10:54:34.588803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.515 qpair failed and we were unable to recover it. 00:29:10.515 [2024-06-10 10:54:34.589143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.515 [2024-06-10 10:54:34.589152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.515 qpair failed and we were unable to recover it. 00:29:10.515 [2024-06-10 10:54:34.589375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.515 [2024-06-10 10:54:34.589384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.515 qpair failed and we were unable to recover it. 00:29:10.515 [2024-06-10 10:54:34.589824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.515 [2024-06-10 10:54:34.589833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.515 qpair failed and we were unable to recover it. 00:29:10.515 [2024-06-10 10:54:34.590035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.515 [2024-06-10 10:54:34.590044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.515 qpair failed and we were unable to recover it. 00:29:10.515 [2024-06-10 10:54:34.590423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.515 [2024-06-10 10:54:34.590432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.515 qpair failed and we were unable to recover it. 00:29:10.515 [2024-06-10 10:54:34.590680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.515 [2024-06-10 10:54:34.590689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.515 qpair failed and we were unable to recover it. 00:29:10.515 [2024-06-10 10:54:34.590918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.516 [2024-06-10 10:54:34.590927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.516 qpair failed and we were unable to recover it. 
00:29:10.516 [2024-06-10 10:54:34.591172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.516 [2024-06-10 10:54:34.591181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.516 qpair failed and we were unable to recover it. 00:29:10.516 [2024-06-10 10:54:34.591548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.516 [2024-06-10 10:54:34.591558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.516 qpair failed and we were unable to recover it. 00:29:10.516 [2024-06-10 10:54:34.591827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.516 [2024-06-10 10:54:34.591836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.516 qpair failed and we were unable to recover it. 00:29:10.516 [2024-06-10 10:54:34.591978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.516 [2024-06-10 10:54:34.591988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.516 qpair failed and we were unable to recover it. 00:29:10.516 [2024-06-10 10:54:34.592350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.516 [2024-06-10 10:54:34.592359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.516 qpair failed and we were unable to recover it. 00:29:10.516 [2024-06-10 10:54:34.592704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.516 [2024-06-10 10:54:34.592712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.516 qpair failed and we were unable to recover it. 00:29:10.516 [2024-06-10 10:54:34.593076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.516 [2024-06-10 10:54:34.593086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.516 qpair failed and we were unable to recover it. 00:29:10.516 [2024-06-10 10:54:34.593490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.516 [2024-06-10 10:54:34.593499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.516 qpair failed and we were unable to recover it. 00:29:10.516 [2024-06-10 10:54:34.593852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.516 [2024-06-10 10:54:34.593861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.516 qpair failed and we were unable to recover it. 00:29:10.516 [2024-06-10 10:54:34.594220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.516 [2024-06-10 10:54:34.594229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.516 qpair failed and we were unable to recover it. 
00:29:10.516 [2024-06-10 10:54:34.594453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.516 [2024-06-10 10:54:34.594462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.516 qpair failed and we were unable to recover it. 00:29:10.516 [2024-06-10 10:54:34.594823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.516 [2024-06-10 10:54:34.594833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.516 qpair failed and we were unable to recover it. 00:29:10.516 [2024-06-10 10:54:34.595220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.516 [2024-06-10 10:54:34.595230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.516 qpair failed and we were unable to recover it. 00:29:10.516 [2024-06-10 10:54:34.595587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.516 [2024-06-10 10:54:34.595597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.516 qpair failed and we were unable to recover it. 00:29:10.516 [2024-06-10 10:54:34.595968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.516 [2024-06-10 10:54:34.595978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.516 qpair failed and we were unable to recover it. 00:29:10.516 [2024-06-10 10:54:34.596344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.516 [2024-06-10 10:54:34.596353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.516 qpair failed and we were unable to recover it. 00:29:10.516 [2024-06-10 10:54:34.596706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.516 [2024-06-10 10:54:34.596715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.516 qpair failed and we were unable to recover it. 00:29:10.516 [2024-06-10 10:54:34.597074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.516 [2024-06-10 10:54:34.597085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.516 qpair failed and we were unable to recover it. 00:29:10.516 [2024-06-10 10:54:34.597423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.516 [2024-06-10 10:54:34.597432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.516 qpair failed and we were unable to recover it. 00:29:10.516 [2024-06-10 10:54:34.597811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.516 [2024-06-10 10:54:34.597821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.516 qpair failed and we were unable to recover it. 
00:29:10.516 [2024-06-10 10:54:34.598175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.516 [2024-06-10 10:54:34.598184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.516 qpair failed and we were unable to recover it. 00:29:10.516 [2024-06-10 10:54:34.598418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.516 [2024-06-10 10:54:34.598428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.516 qpair failed and we were unable to recover it. 00:29:10.516 [2024-06-10 10:54:34.598558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.516 [2024-06-10 10:54:34.598567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.516 qpair failed and we were unable to recover it. 00:29:10.516 [2024-06-10 10:54:34.598809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.516 [2024-06-10 10:54:34.598818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.516 qpair failed and we were unable to recover it. 00:29:10.516 [2024-06-10 10:54:34.599148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.516 [2024-06-10 10:54:34.599157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.516 qpair failed and we were unable to recover it. 00:29:10.516 [2024-06-10 10:54:34.599535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.516 [2024-06-10 10:54:34.599544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.516 qpair failed and we were unable to recover it. 00:29:10.516 [2024-06-10 10:54:34.599796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.516 [2024-06-10 10:54:34.599805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.516 qpair failed and we were unable to recover it. 00:29:10.516 [2024-06-10 10:54:34.600136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.516 [2024-06-10 10:54:34.600145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.516 qpair failed and we were unable to recover it. 00:29:10.516 [2024-06-10 10:54:34.600490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.516 [2024-06-10 10:54:34.600499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.516 qpair failed and we were unable to recover it. 00:29:10.516 [2024-06-10 10:54:34.600749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.516 [2024-06-10 10:54:34.600758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.516 qpair failed and we were unable to recover it. 
00:29:10.516 [2024-06-10 10:54:34.601167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.516 [2024-06-10 10:54:34.601176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.516 qpair failed and we were unable to recover it. 00:29:10.516 [2024-06-10 10:54:34.601484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.516 [2024-06-10 10:54:34.601495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.516 qpair failed and we were unable to recover it. 00:29:10.516 [2024-06-10 10:54:34.601855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.516 [2024-06-10 10:54:34.601864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.516 qpair failed and we were unable to recover it. 00:29:10.516 [2024-06-10 10:54:34.602197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.516 [2024-06-10 10:54:34.602207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.516 qpair failed and we were unable to recover it. 00:29:10.516 [2024-06-10 10:54:34.602560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.516 [2024-06-10 10:54:34.602569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.516 qpair failed and we were unable to recover it. 00:29:10.516 [2024-06-10 10:54:34.602902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.516 [2024-06-10 10:54:34.602911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.517 qpair failed and we were unable to recover it. 00:29:10.517 [2024-06-10 10:54:34.603271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.517 [2024-06-10 10:54:34.603281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.517 qpair failed and we were unable to recover it. 00:29:10.517 [2024-06-10 10:54:34.603641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.517 [2024-06-10 10:54:34.603650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.517 qpair failed and we were unable to recover it. 00:29:10.517 [2024-06-10 10:54:34.603991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.517 [2024-06-10 10:54:34.604000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.517 qpair failed and we were unable to recover it. 00:29:10.517 [2024-06-10 10:54:34.604353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.517 [2024-06-10 10:54:34.604362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.517 qpair failed and we were unable to recover it. 
00:29:10.517 [2024-06-10 10:54:34.604700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.517 [2024-06-10 10:54:34.604709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.517 qpair failed and we were unable to recover it. 00:29:10.517 [2024-06-10 10:54:34.605047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.517 [2024-06-10 10:54:34.605057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.517 qpair failed and we were unable to recover it. 00:29:10.517 [2024-06-10 10:54:34.605431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.517 [2024-06-10 10:54:34.605441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.517 qpair failed and we were unable to recover it. 00:29:10.517 [2024-06-10 10:54:34.605807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.517 [2024-06-10 10:54:34.605815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.517 qpair failed and we were unable to recover it. 00:29:10.517 [2024-06-10 10:54:34.606171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.517 [2024-06-10 10:54:34.606182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.517 qpair failed and we were unable to recover it. 00:29:10.517 [2024-06-10 10:54:34.606542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.517 [2024-06-10 10:54:34.606551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.517 qpair failed and we were unable to recover it. 00:29:10.517 [2024-06-10 10:54:34.606890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.517 [2024-06-10 10:54:34.606899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.517 qpair failed and we were unable to recover it. 00:29:10.517 [2024-06-10 10:54:34.607160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.517 [2024-06-10 10:54:34.607169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.517 qpair failed and we were unable to recover it. 00:29:10.517 [2024-06-10 10:54:34.607299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.517 [2024-06-10 10:54:34.607308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.517 qpair failed and we were unable to recover it. 00:29:10.517 [2024-06-10 10:54:34.607679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.517 [2024-06-10 10:54:34.607688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.517 qpair failed and we were unable to recover it. 
00:29:10.517 [2024-06-10 10:54:34.608113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.517 [2024-06-10 10:54:34.608122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.517 qpair failed and we were unable to recover it. 00:29:10.517 [2024-06-10 10:54:34.608471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.517 [2024-06-10 10:54:34.608481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.517 qpair failed and we were unable to recover it. 00:29:10.517 [2024-06-10 10:54:34.608859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.517 [2024-06-10 10:54:34.608868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.517 qpair failed and we were unable to recover it. 00:29:10.517 [2024-06-10 10:54:34.609203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.517 [2024-06-10 10:54:34.609212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.517 qpair failed and we were unable to recover it. 00:29:10.517 [2024-06-10 10:54:34.609590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.517 [2024-06-10 10:54:34.609600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.517 qpair failed and we were unable to recover it. 00:29:10.517 [2024-06-10 10:54:34.609959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.517 [2024-06-10 10:54:34.609968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.517 qpair failed and we were unable to recover it. 00:29:10.517 [2024-06-10 10:54:34.610306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.517 [2024-06-10 10:54:34.610316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.517 qpair failed and we were unable to recover it. 00:29:10.517 [2024-06-10 10:54:34.610682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.517 [2024-06-10 10:54:34.610691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.517 qpair failed and we were unable to recover it. 00:29:10.517 [2024-06-10 10:54:34.610914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.517 [2024-06-10 10:54:34.610923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.517 qpair failed and we were unable to recover it. 00:29:10.517 [2024-06-10 10:54:34.611173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.517 [2024-06-10 10:54:34.611182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.517 qpair failed and we were unable to recover it. 
00:29:10.517 [2024-06-10 10:54:34.611540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.517 [2024-06-10 10:54:34.611550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.517 qpair failed and we were unable to recover it. 00:29:10.517 [2024-06-10 10:54:34.611711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.517 [2024-06-10 10:54:34.611721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.517 qpair failed and we were unable to recover it. 00:29:10.517 [2024-06-10 10:54:34.612044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.517 [2024-06-10 10:54:34.612053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.517 qpair failed and we were unable to recover it. 00:29:10.517 [2024-06-10 10:54:34.612402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.517 [2024-06-10 10:54:34.612411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.517 qpair failed and we were unable to recover it. 00:29:10.517 [2024-06-10 10:54:34.612764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.517 [2024-06-10 10:54:34.612773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.517 qpair failed and we were unable to recover it. 00:29:10.517 [2024-06-10 10:54:34.613130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.517 [2024-06-10 10:54:34.613139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.517 qpair failed and we were unable to recover it. 00:29:10.517 [2024-06-10 10:54:34.613485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.517 [2024-06-10 10:54:34.613497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.517 qpair failed and we were unable to recover it. 00:29:10.517 [2024-06-10 10:54:34.613848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.517 [2024-06-10 10:54:34.613857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.517 qpair failed and we were unable to recover it. 00:29:10.517 [2024-06-10 10:54:34.614235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.517 [2024-06-10 10:54:34.614249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.517 qpair failed and we were unable to recover it. 00:29:10.517 [2024-06-10 10:54:34.614618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.517 [2024-06-10 10:54:34.614627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.517 qpair failed and we were unable to recover it. 
00:29:10.517 [2024-06-10 10:54:34.615003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.517 [2024-06-10 10:54:34.615011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.518 qpair failed and we were unable to recover it. 00:29:10.518 [2024-06-10 10:54:34.615270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.518 [2024-06-10 10:54:34.615281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.518 qpair failed and we were unable to recover it. 00:29:10.518 [2024-06-10 10:54:34.615640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.518 [2024-06-10 10:54:34.615650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.518 qpair failed and we were unable to recover it. 00:29:10.518 [2024-06-10 10:54:34.615976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.518 [2024-06-10 10:54:34.615987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.518 qpair failed and we were unable to recover it. 00:29:10.518 [2024-06-10 10:54:34.616345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.518 [2024-06-10 10:54:34.616355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.518 qpair failed and we were unable to recover it. 00:29:10.518 [2024-06-10 10:54:34.616800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.518 [2024-06-10 10:54:34.616809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.518 qpair failed and we were unable to recover it. 00:29:10.518 [2024-06-10 10:54:34.617147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.518 [2024-06-10 10:54:34.617156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.518 qpair failed and we were unable to recover it. 00:29:10.518 [2024-06-10 10:54:34.617531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.518 [2024-06-10 10:54:34.617541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.518 qpair failed and we were unable to recover it. 00:29:10.518 [2024-06-10 10:54:34.617788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.518 [2024-06-10 10:54:34.617797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.518 qpair failed and we were unable to recover it. 00:29:10.518 [2024-06-10 10:54:34.618160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.518 [2024-06-10 10:54:34.618170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.518 qpair failed and we were unable to recover it. 
00:29:10.518 [2024-06-10 10:54:34.618493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.518 [2024-06-10 10:54:34.618502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.518 qpair failed and we were unable to recover it. 00:29:10.518 [2024-06-10 10:54:34.618865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.518 [2024-06-10 10:54:34.618874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.518 qpair failed and we were unable to recover it. 00:29:10.518 [2024-06-10 10:54:34.619233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.518 [2024-06-10 10:54:34.619247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.518 qpair failed and we were unable to recover it. 00:29:10.518 [2024-06-10 10:54:34.619623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.518 [2024-06-10 10:54:34.619632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.518 qpair failed and we were unable to recover it. 00:29:10.518 [2024-06-10 10:54:34.619783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.518 [2024-06-10 10:54:34.619794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.518 qpair failed and we were unable to recover it. 00:29:10.518 [2024-06-10 10:54:34.620169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.518 [2024-06-10 10:54:34.620178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.518 qpair failed and we were unable to recover it. 00:29:10.518 [2024-06-10 10:54:34.620544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.518 [2024-06-10 10:54:34.620554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.518 qpair failed and we were unable to recover it. 00:29:10.518 [2024-06-10 10:54:34.620909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.518 [2024-06-10 10:54:34.620919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.518 qpair failed and we were unable to recover it. 00:29:10.518 [2024-06-10 10:54:34.621289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.518 [2024-06-10 10:54:34.621298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.518 qpair failed and we were unable to recover it. 00:29:10.518 [2024-06-10 10:54:34.621655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.518 [2024-06-10 10:54:34.621665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.518 qpair failed and we were unable to recover it. 
00:29:10.518 [2024-06-10 10:54:34.621988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.518 [2024-06-10 10:54:34.621998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.518 qpair failed and we were unable to recover it. 00:29:10.518 [2024-06-10 10:54:34.622360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.518 [2024-06-10 10:54:34.622370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.518 qpair failed and we were unable to recover it. 00:29:10.518 [2024-06-10 10:54:34.622713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.518 [2024-06-10 10:54:34.622723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.518 qpair failed and we were unable to recover it. 00:29:10.518 [2024-06-10 10:54:34.623039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.518 [2024-06-10 10:54:34.623049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.518 qpair failed and we were unable to recover it. 00:29:10.518 [2024-06-10 10:54:34.623447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.518 [2024-06-10 10:54:34.623456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.518 qpair failed and we were unable to recover it. 00:29:10.518 [2024-06-10 10:54:34.623820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.518 [2024-06-10 10:54:34.623830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.518 qpair failed and we were unable to recover it. 00:29:10.518 [2024-06-10 10:54:34.624059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.518 [2024-06-10 10:54:34.624068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.518 qpair failed and we were unable to recover it. 00:29:10.518 [2024-06-10 10:54:34.624393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.518 [2024-06-10 10:54:34.624403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.518 qpair failed and we were unable to recover it. 00:29:10.518 [2024-06-10 10:54:34.624784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.518 [2024-06-10 10:54:34.624794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.519 qpair failed and we were unable to recover it. 00:29:10.519 [2024-06-10 10:54:34.625136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.519 [2024-06-10 10:54:34.625145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.519 qpair failed and we were unable to recover it. 
00:29:10.519 [2024-06-10 10:54:34.625415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.519 [2024-06-10 10:54:34.625425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.519 qpair failed and we were unable to recover it. 00:29:10.519 [2024-06-10 10:54:34.625786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.519 [2024-06-10 10:54:34.625795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.519 qpair failed and we were unable to recover it. 00:29:10.519 [2024-06-10 10:54:34.626070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.519 [2024-06-10 10:54:34.626079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.519 qpair failed and we were unable to recover it. 00:29:10.519 [2024-06-10 10:54:34.626444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.519 [2024-06-10 10:54:34.626454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.519 qpair failed and we were unable to recover it. 00:29:10.519 [2024-06-10 10:54:34.626899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.519 [2024-06-10 10:54:34.626909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.519 qpair failed and we were unable to recover it. 00:29:10.519 [2024-06-10 10:54:34.627244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.519 [2024-06-10 10:54:34.627254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.519 qpair failed and we were unable to recover it. 00:29:10.519 [2024-06-10 10:54:34.627592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.519 [2024-06-10 10:54:34.627602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.519 qpair failed and we were unable to recover it. 00:29:10.519 [2024-06-10 10:54:34.627937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.519 [2024-06-10 10:54:34.627946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.519 qpair failed and we were unable to recover it. 00:29:10.519 [2024-06-10 10:54:34.628186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.519 [2024-06-10 10:54:34.628195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.519 qpair failed and we were unable to recover it. 00:29:10.519 [2024-06-10 10:54:34.628626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.519 [2024-06-10 10:54:34.628635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.519 qpair failed and we were unable to recover it. 
00:29:10.519 [2024-06-10 10:54:34.628986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.519 [2024-06-10 10:54:34.628995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.519 qpair failed and we were unable to recover it. 00:29:10.519 [2024-06-10 10:54:34.629358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.519 [2024-06-10 10:54:34.629369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.519 qpair failed and we were unable to recover it. 00:29:10.519 [2024-06-10 10:54:34.629720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.519 [2024-06-10 10:54:34.629729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.519 qpair failed and we were unable to recover it. 00:29:10.519 [2024-06-10 10:54:34.630099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.519 [2024-06-10 10:54:34.630111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.519 qpair failed and we were unable to recover it. 00:29:10.519 [2024-06-10 10:54:34.630488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.519 [2024-06-10 10:54:34.630497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.519 qpair failed and we were unable to recover it. 00:29:10.519 [2024-06-10 10:54:34.630829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.519 [2024-06-10 10:54:34.630838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.519 qpair failed and we were unable to recover it. 00:29:10.519 [2024-06-10 10:54:34.631190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.519 [2024-06-10 10:54:34.631199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.519 qpair failed and we were unable to recover it. 00:29:10.519 [2024-06-10 10:54:34.631599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.519 [2024-06-10 10:54:34.631609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.519 qpair failed and we were unable to recover it. 00:29:10.519 [2024-06-10 10:54:34.631991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.519 [2024-06-10 10:54:34.632000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.519 qpair failed and we were unable to recover it. 00:29:10.519 [2024-06-10 10:54:34.632155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.519 [2024-06-10 10:54:34.632165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.519 qpair failed and we were unable to recover it. 
00:29:10.519 [2024-06-10 10:54:34.632554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.519 [2024-06-10 10:54:34.632564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420
00:29:10.519 qpair failed and we were unable to recover it.
[... the same three-line sequence — posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. — repeats for every reconnect attempt from 2024-06-10 10:54:34.632909 through 10:54:34.706439 (console prefixes 00:29:10.519-00:29:10.525) ...]
00:29:10.525 [2024-06-10 10:54:34.706792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.525 [2024-06-10 10:54:34.706801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.525 qpair failed and we were unable to recover it. 00:29:10.525 [2024-06-10 10:54:34.707027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.525 [2024-06-10 10:54:34.707036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.525 qpair failed and we were unable to recover it. 00:29:10.525 [2024-06-10 10:54:34.707391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.525 [2024-06-10 10:54:34.707400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.525 qpair failed and we were unable to recover it. 00:29:10.525 [2024-06-10 10:54:34.707754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.525 [2024-06-10 10:54:34.707763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.525 qpair failed and we were unable to recover it. 00:29:10.525 [2024-06-10 10:54:34.708130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.525 [2024-06-10 10:54:34.708146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.525 qpair failed and we were unable to recover it. 00:29:10.525 [2024-06-10 10:54:34.708411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.525 [2024-06-10 10:54:34.708420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.525 qpair failed and we were unable to recover it. 00:29:10.525 [2024-06-10 10:54:34.708832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.525 [2024-06-10 10:54:34.708841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.525 qpair failed and we were unable to recover it. 00:29:10.525 [2024-06-10 10:54:34.709103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.525 [2024-06-10 10:54:34.709113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.525 qpair failed and we were unable to recover it. 00:29:10.525 [2024-06-10 10:54:34.709460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.525 [2024-06-10 10:54:34.709469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.525 qpair failed and we were unable to recover it. 00:29:10.525 [2024-06-10 10:54:34.709802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.525 [2024-06-10 10:54:34.709812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.525 qpair failed and we were unable to recover it. 
00:29:10.525 [2024-06-10 10:54:34.710188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.525 [2024-06-10 10:54:34.710198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.525 qpair failed and we were unable to recover it. 00:29:10.525 [2024-06-10 10:54:34.710596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.525 [2024-06-10 10:54:34.710605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.525 qpair failed and we were unable to recover it. 00:29:10.525 [2024-06-10 10:54:34.710979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.525 [2024-06-10 10:54:34.710988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.525 qpair failed and we were unable to recover it. 00:29:10.525 [2024-06-10 10:54:34.711354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.525 [2024-06-10 10:54:34.711363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.525 qpair failed and we were unable to recover it. 00:29:10.525 [2024-06-10 10:54:34.711709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.525 [2024-06-10 10:54:34.711718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.525 qpair failed and we were unable to recover it. 00:29:10.525 [2024-06-10 10:54:34.712067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.525 [2024-06-10 10:54:34.712084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.525 qpair failed and we were unable to recover it. 00:29:10.525 [2024-06-10 10:54:34.712441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.525 [2024-06-10 10:54:34.712450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.525 qpair failed and we were unable to recover it. 00:29:10.526 [2024-06-10 10:54:34.712697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.526 [2024-06-10 10:54:34.712706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.526 qpair failed and we were unable to recover it. 00:29:10.526 [2024-06-10 10:54:34.713095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.526 [2024-06-10 10:54:34.713103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.526 qpair failed and we were unable to recover it. 00:29:10.526 [2024-06-10 10:54:34.713434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.526 [2024-06-10 10:54:34.713444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.526 qpair failed and we were unable to recover it. 
00:29:10.526 [2024-06-10 10:54:34.713805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.526 [2024-06-10 10:54:34.713814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.526 qpair failed and we were unable to recover it. 00:29:10.526 [2024-06-10 10:54:34.714173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.526 [2024-06-10 10:54:34.714181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.526 qpair failed and we were unable to recover it. 00:29:10.526 [2024-06-10 10:54:34.714530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.526 [2024-06-10 10:54:34.714539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.526 qpair failed and we were unable to recover it. 00:29:10.526 [2024-06-10 10:54:34.714905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.526 [2024-06-10 10:54:34.714915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.526 qpair failed and we were unable to recover it. 00:29:10.526 [2024-06-10 10:54:34.715222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.526 [2024-06-10 10:54:34.715231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.526 qpair failed and we were unable to recover it. 00:29:10.526 [2024-06-10 10:54:34.715595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.526 [2024-06-10 10:54:34.715605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.526 qpair failed and we were unable to recover it. 00:29:10.526 [2024-06-10 10:54:34.715938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.526 [2024-06-10 10:54:34.715949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.526 qpair failed and we were unable to recover it. 00:29:10.526 [2024-06-10 10:54:34.716278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.526 [2024-06-10 10:54:34.716288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.526 qpair failed and we were unable to recover it. 00:29:10.526 [2024-06-10 10:54:34.716607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.526 [2024-06-10 10:54:34.716616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.526 qpair failed and we were unable to recover it. 00:29:10.526 [2024-06-10 10:54:34.716946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.526 [2024-06-10 10:54:34.716955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.526 qpair failed and we were unable to recover it. 
00:29:10.526 [2024-06-10 10:54:34.717298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.526 [2024-06-10 10:54:34.717308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.526 qpair failed and we were unable to recover it. 00:29:10.526 [2024-06-10 10:54:34.717667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.526 [2024-06-10 10:54:34.717676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.526 qpair failed and we were unable to recover it. 00:29:10.526 [2024-06-10 10:54:34.718010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.526 [2024-06-10 10:54:34.718020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.526 qpair failed and we were unable to recover it. 00:29:10.526 [2024-06-10 10:54:34.718375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.526 [2024-06-10 10:54:34.718384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.526 qpair failed and we were unable to recover it. 00:29:10.526 [2024-06-10 10:54:34.718730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.526 [2024-06-10 10:54:34.718740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.526 qpair failed and we were unable to recover it. 00:29:10.526 [2024-06-10 10:54:34.719098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.526 [2024-06-10 10:54:34.719107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.526 qpair failed and we were unable to recover it. 00:29:10.526 [2024-06-10 10:54:34.719439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.526 [2024-06-10 10:54:34.719449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.526 qpair failed and we were unable to recover it. 00:29:10.526 [2024-06-10 10:54:34.719694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.526 [2024-06-10 10:54:34.719703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.526 qpair failed and we were unable to recover it. 00:29:10.526 [2024-06-10 10:54:34.720044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.526 [2024-06-10 10:54:34.720053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.526 qpair failed and we were unable to recover it. 00:29:10.526 [2024-06-10 10:54:34.720303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.526 [2024-06-10 10:54:34.720312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.526 qpair failed and we were unable to recover it. 
00:29:10.526 [2024-06-10 10:54:34.720698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.526 [2024-06-10 10:54:34.720707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.526 qpair failed and we were unable to recover it. 00:29:10.526 [2024-06-10 10:54:34.721040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.526 [2024-06-10 10:54:34.721049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.526 qpair failed and we were unable to recover it. 00:29:10.526 [2024-06-10 10:54:34.721376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.526 [2024-06-10 10:54:34.721386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.526 qpair failed and we were unable to recover it. 00:29:10.526 [2024-06-10 10:54:34.721751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.526 [2024-06-10 10:54:34.721759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.526 qpair failed and we were unable to recover it. 00:29:10.526 [2024-06-10 10:54:34.722089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.526 [2024-06-10 10:54:34.722098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.526 qpair failed and we were unable to recover it. 00:29:10.526 [2024-06-10 10:54:34.722429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.526 [2024-06-10 10:54:34.722438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.526 qpair failed and we were unable to recover it. 00:29:10.526 [2024-06-10 10:54:34.722765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.526 [2024-06-10 10:54:34.722774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.526 qpair failed and we were unable to recover it. 00:29:10.526 [2024-06-10 10:54:34.723192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.526 [2024-06-10 10:54:34.723201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.526 qpair failed and we were unable to recover it. 00:29:10.526 [2024-06-10 10:54:34.723565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.526 [2024-06-10 10:54:34.723575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.526 qpair failed and we were unable to recover it. 00:29:10.526 [2024-06-10 10:54:34.723932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.526 [2024-06-10 10:54:34.723942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.526 qpair failed and we were unable to recover it. 
00:29:10.526 [2024-06-10 10:54:34.724321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.526 [2024-06-10 10:54:34.724330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.526 qpair failed and we were unable to recover it. 00:29:10.526 [2024-06-10 10:54:34.724667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.526 [2024-06-10 10:54:34.724675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.526 qpair failed and we were unable to recover it. 00:29:10.526 [2024-06-10 10:54:34.724996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.526 [2024-06-10 10:54:34.725005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.526 qpair failed and we were unable to recover it. 00:29:10.526 [2024-06-10 10:54:34.725364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.526 [2024-06-10 10:54:34.725375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.526 qpair failed and we were unable to recover it. 00:29:10.526 [2024-06-10 10:54:34.725760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.526 [2024-06-10 10:54:34.725768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.526 qpair failed and we were unable to recover it. 00:29:10.526 [2024-06-10 10:54:34.726145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.527 [2024-06-10 10:54:34.726155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.527 qpair failed and we were unable to recover it. 00:29:10.527 [2024-06-10 10:54:34.726536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.527 [2024-06-10 10:54:34.726546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.527 qpair failed and we were unable to recover it. 00:29:10.527 [2024-06-10 10:54:34.726878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.527 [2024-06-10 10:54:34.726887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.527 qpair failed and we were unable to recover it. 00:29:10.527 [2024-06-10 10:54:34.727235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.527 [2024-06-10 10:54:34.727248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.527 qpair failed and we were unable to recover it. 00:29:10.527 [2024-06-10 10:54:34.727502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.527 [2024-06-10 10:54:34.727511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.527 qpair failed and we were unable to recover it. 
00:29:10.527 [2024-06-10 10:54:34.727813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.527 [2024-06-10 10:54:34.727822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.527 qpair failed and we were unable to recover it. 00:29:10.527 [2024-06-10 10:54:34.728185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.527 [2024-06-10 10:54:34.728195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.527 qpair failed and we were unable to recover it. 00:29:10.527 [2024-06-10 10:54:34.728542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.527 [2024-06-10 10:54:34.728551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.527 qpair failed and we were unable to recover it. 00:29:10.527 [2024-06-10 10:54:34.728884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.527 [2024-06-10 10:54:34.728892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.527 qpair failed and we were unable to recover it. 00:29:10.527 [2024-06-10 10:54:34.729260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.527 [2024-06-10 10:54:34.729269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.527 qpair failed and we were unable to recover it. 00:29:10.527 [2024-06-10 10:54:34.729629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.527 [2024-06-10 10:54:34.729637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.527 qpair failed and we were unable to recover it. 00:29:10.527 [2024-06-10 10:54:34.729977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.527 [2024-06-10 10:54:34.729986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.527 qpair failed and we were unable to recover it. 00:29:10.527 [2024-06-10 10:54:34.730351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.527 [2024-06-10 10:54:34.730360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.527 qpair failed and we were unable to recover it. 00:29:10.527 [2024-06-10 10:54:34.730715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.527 [2024-06-10 10:54:34.730724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.527 qpair failed and we were unable to recover it. 00:29:10.527 [2024-06-10 10:54:34.731058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.527 [2024-06-10 10:54:34.731067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.527 qpair failed and we were unable to recover it. 
00:29:10.527 [2024-06-10 10:54:34.731420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.527 [2024-06-10 10:54:34.731430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.527 qpair failed and we were unable to recover it. 00:29:10.527 [2024-06-10 10:54:34.731768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.527 [2024-06-10 10:54:34.731776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.527 qpair failed and we were unable to recover it. 00:29:10.527 [2024-06-10 10:54:34.732125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.527 [2024-06-10 10:54:34.732134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.527 qpair failed and we were unable to recover it. 00:29:10.527 [2024-06-10 10:54:34.732515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.527 [2024-06-10 10:54:34.732525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.527 qpair failed and we were unable to recover it. 00:29:10.527 [2024-06-10 10:54:34.732860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.527 [2024-06-10 10:54:34.732869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.527 qpair failed and we were unable to recover it. 00:29:10.527 [2024-06-10 10:54:34.733218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.527 [2024-06-10 10:54:34.733227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.527 qpair failed and we were unable to recover it. 00:29:10.527 [2024-06-10 10:54:34.733578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.527 [2024-06-10 10:54:34.733587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.527 qpair failed and we were unable to recover it. 00:29:10.527 [2024-06-10 10:54:34.733994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.527 [2024-06-10 10:54:34.734003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.527 qpair failed and we were unable to recover it. 00:29:10.527 [2024-06-10 10:54:34.734361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.527 [2024-06-10 10:54:34.734370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.527 qpair failed and we were unable to recover it. 00:29:10.527 [2024-06-10 10:54:34.734563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.527 [2024-06-10 10:54:34.734573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.527 qpair failed and we were unable to recover it. 
00:29:10.527 [2024-06-10 10:54:34.734815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.527 [2024-06-10 10:54:34.734826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.527 qpair failed and we were unable to recover it. 00:29:10.527 [2024-06-10 10:54:34.735183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.527 [2024-06-10 10:54:34.735192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.527 qpair failed and we were unable to recover it. 00:29:10.527 [2024-06-10 10:54:34.735561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.527 [2024-06-10 10:54:34.735571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.527 qpair failed and we were unable to recover it. 00:29:10.527 [2024-06-10 10:54:34.735931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.527 [2024-06-10 10:54:34.735940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.527 qpair failed and we were unable to recover it. 00:29:10.527 [2024-06-10 10:54:34.736274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.527 [2024-06-10 10:54:34.736283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.527 qpair failed and we were unable to recover it. 00:29:10.527 [2024-06-10 10:54:34.736663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.527 [2024-06-10 10:54:34.736672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.527 qpair failed and we were unable to recover it. 00:29:10.527 [2024-06-10 10:54:34.737051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.527 [2024-06-10 10:54:34.737061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.527 qpair failed and we were unable to recover it. 00:29:10.527 [2024-06-10 10:54:34.737416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.527 [2024-06-10 10:54:34.737425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.527 qpair failed and we were unable to recover it. 00:29:10.527 [2024-06-10 10:54:34.737782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.528 [2024-06-10 10:54:34.737790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.528 qpair failed and we were unable to recover it. 00:29:10.528 [2024-06-10 10:54:34.738180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.528 [2024-06-10 10:54:34.738189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.528 qpair failed and we were unable to recover it. 
00:29:10.528 [2024-06-10 10:54:34.738546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.528 [2024-06-10 10:54:34.738555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.528 qpair failed and we were unable to recover it. 00:29:10.528 [2024-06-10 10:54:34.738895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.528 [2024-06-10 10:54:34.738904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.528 qpair failed and we were unable to recover it. 00:29:10.528 [2024-06-10 10:54:34.739256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.528 [2024-06-10 10:54:34.739266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.528 qpair failed and we were unable to recover it. 00:29:10.528 [2024-06-10 10:54:34.739629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.528 [2024-06-10 10:54:34.739638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.528 qpair failed and we were unable to recover it. 00:29:10.528 [2024-06-10 10:54:34.739828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.528 [2024-06-10 10:54:34.739838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.528 qpair failed and we were unable to recover it. 00:29:10.528 [2024-06-10 10:54:34.740198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.528 [2024-06-10 10:54:34.740208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.528 qpair failed and we were unable to recover it. 00:29:10.528 [2024-06-10 10:54:34.740552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.528 [2024-06-10 10:54:34.740561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.528 qpair failed and we were unable to recover it. 00:29:10.528 [2024-06-10 10:54:34.740893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.528 [2024-06-10 10:54:34.740903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.528 qpair failed and we were unable to recover it. 00:29:10.528 [2024-06-10 10:54:34.741284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.528 [2024-06-10 10:54:34.741293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.528 qpair failed and we were unable to recover it. 00:29:10.528 [2024-06-10 10:54:34.741670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.528 [2024-06-10 10:54:34.741679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.528 qpair failed and we were unable to recover it. 
00:29:10.528 [2024-06-10 10:54:34.742036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.528 [2024-06-10 10:54:34.742045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.528 qpair failed and we were unable to recover it. 00:29:10.528 [2024-06-10 10:54:34.742318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.528 [2024-06-10 10:54:34.742328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.528 qpair failed and we were unable to recover it. 00:29:10.528 [2024-06-10 10:54:34.742682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.528 [2024-06-10 10:54:34.742691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.528 qpair failed and we were unable to recover it. 00:29:10.528 [2024-06-10 10:54:34.743028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.528 [2024-06-10 10:54:34.743037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.528 qpair failed and we were unable to recover it. 00:29:10.528 [2024-06-10 10:54:34.743349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.528 [2024-06-10 10:54:34.743358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.528 qpair failed and we were unable to recover it. 00:29:10.528 [2024-06-10 10:54:34.743725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.528 [2024-06-10 10:54:34.743734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.528 qpair failed and we were unable to recover it. 00:29:10.528 [2024-06-10 10:54:34.744052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.528 [2024-06-10 10:54:34.744062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.528 qpair failed and we were unable to recover it. 00:29:10.528 [2024-06-10 10:54:34.744422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.528 [2024-06-10 10:54:34.744431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.528 qpair failed and we were unable to recover it. 00:29:10.528 [2024-06-10 10:54:34.744781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.528 [2024-06-10 10:54:34.744790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.528 qpair failed and we were unable to recover it. 00:29:10.528 [2024-06-10 10:54:34.745102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.528 [2024-06-10 10:54:34.745111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.528 qpair failed and we were unable to recover it. 
00:29:10.528 [2024-06-10 10:54:34.745492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.528 [2024-06-10 10:54:34.745501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.528 qpair failed and we were unable to recover it. 00:29:10.528 [2024-06-10 10:54:34.745837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.528 [2024-06-10 10:54:34.745845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.528 qpair failed and we were unable to recover it. 00:29:10.528 [2024-06-10 10:54:34.746194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.528 [2024-06-10 10:54:34.746210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.528 qpair failed and we were unable to recover it. 00:29:10.528 [2024-06-10 10:54:34.746571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.528 [2024-06-10 10:54:34.746580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.528 qpair failed and we were unable to recover it. 00:29:10.528 [2024-06-10 10:54:34.746923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.528 [2024-06-10 10:54:34.746932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.528 qpair failed and we were unable to recover it. 00:29:10.528 [2024-06-10 10:54:34.747296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.528 [2024-06-10 10:54:34.747313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.528 qpair failed and we were unable to recover it. 00:29:10.528 [2024-06-10 10:54:34.747683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.528 [2024-06-10 10:54:34.747692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.528 qpair failed and we were unable to recover it. 00:29:10.528 [2024-06-10 10:54:34.748083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.528 [2024-06-10 10:54:34.748092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.528 qpair failed and we were unable to recover it. 00:29:10.528 [2024-06-10 10:54:34.748505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.528 [2024-06-10 10:54:34.748514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.528 qpair failed and we were unable to recover it. 00:29:10.528 [2024-06-10 10:54:34.748861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.528 [2024-06-10 10:54:34.748871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.528 qpair failed and we were unable to recover it. 
00:29:10.529 [2024-06-10 10:54:34.749227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.529 [2024-06-10 10:54:34.749237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.529 qpair failed and we were unable to recover it. 00:29:10.529 [2024-06-10 10:54:34.749628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.529 [2024-06-10 10:54:34.749639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.529 qpair failed and we were unable to recover it. 00:29:10.529 [2024-06-10 10:54:34.749973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.529 [2024-06-10 10:54:34.749983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.529 qpair failed and we were unable to recover it. 00:29:10.529 [2024-06-10 10:54:34.750440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.529 [2024-06-10 10:54:34.750476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.529 qpair failed and we were unable to recover it. 00:29:10.529 [2024-06-10 10:54:34.750864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.529 [2024-06-10 10:54:34.750876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.529 qpair failed and we were unable to recover it. 00:29:10.529 [2024-06-10 10:54:34.751295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.529 [2024-06-10 10:54:34.751305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.529 qpair failed and we were unable to recover it. 00:29:10.529 [2024-06-10 10:54:34.751688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.529 [2024-06-10 10:54:34.751697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.529 qpair failed and we were unable to recover it. 00:29:10.529 [2024-06-10 10:54:34.752050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.529 [2024-06-10 10:54:34.752065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.529 qpair failed and we were unable to recover it. 00:29:10.529 [2024-06-10 10:54:34.752420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.529 [2024-06-10 10:54:34.752430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.529 qpair failed and we were unable to recover it. 00:29:10.529 [2024-06-10 10:54:34.752812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.529 [2024-06-10 10:54:34.752821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.529 qpair failed and we were unable to recover it. 
00:29:10.529 [2024-06-10 10:54:34.753212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.529 [2024-06-10 10:54:34.753221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.529 qpair failed and we were unable to recover it. 00:29:10.529 [2024-06-10 10:54:34.753486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.529 [2024-06-10 10:54:34.753496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.529 qpair failed and we were unable to recover it. 00:29:10.529 [2024-06-10 10:54:34.753859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.529 [2024-06-10 10:54:34.753869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.529 qpair failed and we were unable to recover it. 00:29:10.529 [2024-06-10 10:54:34.754220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.529 [2024-06-10 10:54:34.754229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.529 qpair failed and we were unable to recover it. 00:29:10.529 [2024-06-10 10:54:34.754569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.529 [2024-06-10 10:54:34.754579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.529 qpair failed and we were unable to recover it. 00:29:10.529 [2024-06-10 10:54:34.754959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.529 [2024-06-10 10:54:34.754969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.529 qpair failed and we were unable to recover it. 00:29:10.529 [2024-06-10 10:54:34.755346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.529 [2024-06-10 10:54:34.755356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.529 qpair failed and we were unable to recover it. 00:29:10.529 [2024-06-10 10:54:34.755572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.529 [2024-06-10 10:54:34.755580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.529 qpair failed and we were unable to recover it. 00:29:10.529 [2024-06-10 10:54:34.755762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.529 [2024-06-10 10:54:34.755775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.529 qpair failed and we were unable to recover it. 00:29:10.529 [2024-06-10 10:54:34.756150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.529 [2024-06-10 10:54:34.756160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.529 qpair failed and we were unable to recover it. 
00:29:10.529 [2024-06-10 10:54:34.756536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.529 [2024-06-10 10:54:34.756545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.529 qpair failed and we were unable to recover it. 00:29:10.529 [2024-06-10 10:54:34.756803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.529 [2024-06-10 10:54:34.756812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.529 qpair failed and we were unable to recover it. 00:29:10.529 [2024-06-10 10:54:34.757140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.529 [2024-06-10 10:54:34.757149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.529 qpair failed and we were unable to recover it. 00:29:10.529 [2024-06-10 10:54:34.757504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.529 [2024-06-10 10:54:34.757513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.529 qpair failed and we were unable to recover it. 00:29:10.529 [2024-06-10 10:54:34.757802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.529 [2024-06-10 10:54:34.757811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.529 qpair failed and we were unable to recover it. 00:29:10.529 [2024-06-10 10:54:34.758172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.529 [2024-06-10 10:54:34.758182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.529 qpair failed and we were unable to recover it. 00:29:10.529 [2024-06-10 10:54:34.758531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.529 [2024-06-10 10:54:34.758540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.529 qpair failed and we were unable to recover it. 00:29:10.529 [2024-06-10 10:54:34.758878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.529 [2024-06-10 10:54:34.758887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.529 qpair failed and we were unable to recover it. 00:29:10.529 [2024-06-10 10:54:34.759240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.529 [2024-06-10 10:54:34.759257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.529 qpair failed and we were unable to recover it. 00:29:10.529 [2024-06-10 10:54:34.759614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.529 [2024-06-10 10:54:34.759623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.529 qpair failed and we were unable to recover it. 
00:29:10.529 [2024-06-10 10:54:34.759966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.529 [2024-06-10 10:54:34.759975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.529 qpair failed and we were unable to recover it. 00:29:10.529 [2024-06-10 10:54:34.760404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.529 [2024-06-10 10:54:34.760414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.529 qpair failed and we were unable to recover it. 00:29:10.529 [2024-06-10 10:54:34.760722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.529 [2024-06-10 10:54:34.760731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.529 qpair failed and we were unable to recover it. 00:29:10.529 [2024-06-10 10:54:34.761086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.529 [2024-06-10 10:54:34.761095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.529 qpair failed and we were unable to recover it. 00:29:10.529 [2024-06-10 10:54:34.761436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.529 [2024-06-10 10:54:34.761445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.529 qpair failed and we were unable to recover it. 00:29:10.529 [2024-06-10 10:54:34.761818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.529 [2024-06-10 10:54:34.761827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.529 qpair failed and we were unable to recover it. 00:29:10.530 [2024-06-10 10:54:34.762188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.530 [2024-06-10 10:54:34.762196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.530 qpair failed and we were unable to recover it. 00:29:10.530 [2024-06-10 10:54:34.762547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.530 [2024-06-10 10:54:34.762556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.530 qpair failed and we were unable to recover it. 00:29:10.530 [2024-06-10 10:54:34.762909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.530 [2024-06-10 10:54:34.762918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.530 qpair failed and we were unable to recover it. 00:29:10.530 [2024-06-10 10:54:34.763218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.530 [2024-06-10 10:54:34.763226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.530 qpair failed and we were unable to recover it. 
00:29:10.530 [2024-06-10 10:54:34.763601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.530 [2024-06-10 10:54:34.763611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.530 qpair failed and we were unable to recover it. 00:29:10.530 [2024-06-10 10:54:34.763815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.530 [2024-06-10 10:54:34.763826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.530 qpair failed and we were unable to recover it. 00:29:10.530 [2024-06-10 10:54:34.764184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.530 [2024-06-10 10:54:34.764195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.530 qpair failed and we were unable to recover it. 00:29:10.530 [2024-06-10 10:54:34.764558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.530 [2024-06-10 10:54:34.764568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.530 qpair failed and we were unable to recover it. 00:29:10.530 [2024-06-10 10:54:34.764768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.530 [2024-06-10 10:54:34.764778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.530 qpair failed and we were unable to recover it. 00:29:10.530 [2024-06-10 10:54:34.765209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.530 [2024-06-10 10:54:34.765218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.530 qpair failed and we were unable to recover it. 00:29:10.530 [2024-06-10 10:54:34.765571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.530 [2024-06-10 10:54:34.765581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.530 qpair failed and we were unable to recover it. 00:29:10.530 [2024-06-10 10:54:34.765931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.530 [2024-06-10 10:54:34.765940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.530 qpair failed and we were unable to recover it. 00:29:10.530 [2024-06-10 10:54:34.766296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.530 [2024-06-10 10:54:34.766305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.530 qpair failed and we were unable to recover it. 00:29:10.530 [2024-06-10 10:54:34.766798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.530 [2024-06-10 10:54:34.766807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.530 qpair failed and we were unable to recover it. 
00:29:10.530 [2024-06-10 10:54:34.767153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.530 [2024-06-10 10:54:34.767162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.530 qpair failed and we were unable to recover it. 00:29:10.530 [2024-06-10 10:54:34.767524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.530 [2024-06-10 10:54:34.767533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.530 qpair failed and we were unable to recover it. 00:29:10.530 [2024-06-10 10:54:34.767893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.530 [2024-06-10 10:54:34.767903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.530 qpair failed and we were unable to recover it. 00:29:10.530 [2024-06-10 10:54:34.768161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.530 [2024-06-10 10:54:34.768170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.530 qpair failed and we were unable to recover it. 00:29:10.530 [2024-06-10 10:54:34.768548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.530 [2024-06-10 10:54:34.768558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.530 qpair failed and we were unable to recover it. 00:29:10.530 [2024-06-10 10:54:34.768952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.530 [2024-06-10 10:54:34.768963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.530 qpair failed and we were unable to recover it. 00:29:10.530 [2024-06-10 10:54:34.769326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.530 [2024-06-10 10:54:34.769335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.530 qpair failed and we were unable to recover it. 00:29:10.530 [2024-06-10 10:54:34.769694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.530 [2024-06-10 10:54:34.769703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.530 qpair failed and we were unable to recover it. 00:29:10.530 [2024-06-10 10:54:34.770046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.530 [2024-06-10 10:54:34.770056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.530 qpair failed and we were unable to recover it. 00:29:10.530 [2024-06-10 10:54:34.770407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.530 [2024-06-10 10:54:34.770416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.530 qpair failed and we were unable to recover it. 
00:29:10.530 [2024-06-10 10:54:34.770748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.530 [2024-06-10 10:54:34.770757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.530 qpair failed and we were unable to recover it. 00:29:10.530 [2024-06-10 10:54:34.771110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.530 [2024-06-10 10:54:34.771125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.530 qpair failed and we were unable to recover it. 00:29:10.530 [2024-06-10 10:54:34.771507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.530 [2024-06-10 10:54:34.771516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.530 qpair failed and we were unable to recover it. 00:29:10.530 [2024-06-10 10:54:34.771790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.530 [2024-06-10 10:54:34.771800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.530 qpair failed and we were unable to recover it. 00:29:10.530 [2024-06-10 10:54:34.772158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.530 [2024-06-10 10:54:34.772167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.530 qpair failed and we were unable to recover it. 00:29:10.530 [2024-06-10 10:54:34.772510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.530 [2024-06-10 10:54:34.772520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.530 qpair failed and we were unable to recover it. 00:29:10.530 [2024-06-10 10:54:34.772901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.530 [2024-06-10 10:54:34.772910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.530 qpair failed and we were unable to recover it. 00:29:10.530 [2024-06-10 10:54:34.773265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.530 [2024-06-10 10:54:34.773275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.530 qpair failed and we were unable to recover it. 00:29:10.530 [2024-06-10 10:54:34.773632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.530 [2024-06-10 10:54:34.773641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.530 qpair failed and we were unable to recover it. 00:29:10.530 [2024-06-10 10:54:34.773974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.530 [2024-06-10 10:54:34.773983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.531 qpair failed and we were unable to recover it. 
00:29:10.531 [2024-06-10 10:54:34.774335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.531 [2024-06-10 10:54:34.774344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.531 qpair failed and we were unable to recover it. 00:29:10.531 [2024-06-10 10:54:34.774697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.531 [2024-06-10 10:54:34.774706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.531 qpair failed and we were unable to recover it. 00:29:10.531 [2024-06-10 10:54:34.775037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.531 [2024-06-10 10:54:34.775046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.531 qpair failed and we were unable to recover it. 00:29:10.531 [2024-06-10 10:54:34.775421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.531 [2024-06-10 10:54:34.775431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.531 qpair failed and we were unable to recover it. 00:29:10.531 [2024-06-10 10:54:34.775784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.531 [2024-06-10 10:54:34.775793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.531 qpair failed and we were unable to recover it. 00:29:10.531 [2024-06-10 10:54:34.776129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.531 [2024-06-10 10:54:34.776138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.531 qpair failed and we were unable to recover it. 00:29:10.531 [2024-06-10 10:54:34.776525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.531 [2024-06-10 10:54:34.776536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.531 qpair failed and we were unable to recover it. 00:29:10.531 [2024-06-10 10:54:34.776891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.531 [2024-06-10 10:54:34.776899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.531 qpair failed and we were unable to recover it. 00:29:10.531 [2024-06-10 10:54:34.777234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.531 [2024-06-10 10:54:34.777285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.531 qpair failed and we were unable to recover it. 00:29:10.531 [2024-06-10 10:54:34.777658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.531 [2024-06-10 10:54:34.777668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.531 qpair failed and we were unable to recover it. 
00:29:10.531 [2024-06-10 10:54:34.777861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.531 [2024-06-10 10:54:34.777871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.531 qpair failed and we were unable to recover it. 00:29:10.531 [2024-06-10 10:54:34.778249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.531 [2024-06-10 10:54:34.778259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.531 qpair failed and we were unable to recover it. 00:29:10.531 [2024-06-10 10:54:34.778630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.531 [2024-06-10 10:54:34.778639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.531 qpair failed and we were unable to recover it. 00:29:10.531 [2024-06-10 10:54:34.778903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.531 [2024-06-10 10:54:34.778912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.531 qpair failed and we were unable to recover it. 00:29:10.531 [2024-06-10 10:54:34.779288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.531 [2024-06-10 10:54:34.779297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.531 qpair failed and we were unable to recover it. 00:29:10.531 [2024-06-10 10:54:34.779635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.531 [2024-06-10 10:54:34.779644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.531 qpair failed and we were unable to recover it. 00:29:10.531 [2024-06-10 10:54:34.780030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.531 [2024-06-10 10:54:34.780039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.531 qpair failed and we were unable to recover it. 00:29:10.531 [2024-06-10 10:54:34.780383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.531 [2024-06-10 10:54:34.780392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.531 qpair failed and we were unable to recover it. 00:29:10.531 [2024-06-10 10:54:34.780761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.531 [2024-06-10 10:54:34.780770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.531 qpair failed and we were unable to recover it. 00:29:10.531 [2024-06-10 10:54:34.781168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.531 [2024-06-10 10:54:34.781177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.531 qpair failed and we were unable to recover it. 
00:29:10.531 [2024-06-10 10:54:34.781528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.531 [2024-06-10 10:54:34.781537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.531 qpair failed and we were unable to recover it. 00:29:10.531 [2024-06-10 10:54:34.781893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.531 [2024-06-10 10:54:34.781903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.531 qpair failed and we were unable to recover it. 00:29:10.531 [2024-06-10 10:54:34.782262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.531 [2024-06-10 10:54:34.782271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.531 qpair failed and we were unable to recover it. 00:29:10.531 [2024-06-10 10:54:34.782665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.531 [2024-06-10 10:54:34.782673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.531 qpair failed and we were unable to recover it. 00:29:10.531 [2024-06-10 10:54:34.783064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.531 [2024-06-10 10:54:34.783073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.531 qpair failed and we were unable to recover it. 00:29:10.531 [2024-06-10 10:54:34.783434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.531 [2024-06-10 10:54:34.783443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.531 qpair failed and we were unable to recover it. 00:29:10.531 [2024-06-10 10:54:34.783766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.531 [2024-06-10 10:54:34.783775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.531 qpair failed and we were unable to recover it. 00:29:10.531 [2024-06-10 10:54:34.784080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.531 [2024-06-10 10:54:34.784089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.531 qpair failed and we were unable to recover it. 00:29:10.806 [2024-06-10 10:54:34.784458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.806 [2024-06-10 10:54:34.784470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.806 qpair failed and we were unable to recover it. 00:29:10.806 [2024-06-10 10:54:34.784801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.806 [2024-06-10 10:54:34.784810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.806 qpair failed and we were unable to recover it. 
00:29:10.806 [2024-06-10 10:54:34.785180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.806 [2024-06-10 10:54:34.785196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.806 qpair failed and we were unable to recover it. 00:29:10.806 [2024-06-10 10:54:34.785449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.807 [2024-06-10 10:54:34.785458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.807 qpair failed and we were unable to recover it. 00:29:10.807 [2024-06-10 10:54:34.785849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.807 [2024-06-10 10:54:34.785858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.807 qpair failed and we were unable to recover it. 00:29:10.807 [2024-06-10 10:54:34.786249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.807 [2024-06-10 10:54:34.786259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.807 qpair failed and we were unable to recover it. 00:29:10.807 [2024-06-10 10:54:34.786592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.807 [2024-06-10 10:54:34.786607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.807 qpair failed and we were unable to recover it. 00:29:10.807 [2024-06-10 10:54:34.786936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.807 [2024-06-10 10:54:34.786946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.807 qpair failed and we were unable to recover it. 00:29:10.807 [2024-06-10 10:54:34.787275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.807 [2024-06-10 10:54:34.787285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.807 qpair failed and we were unable to recover it. 00:29:10.807 [2024-06-10 10:54:34.787682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.807 [2024-06-10 10:54:34.787691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.807 qpair failed and we were unable to recover it. 00:29:10.807 [2024-06-10 10:54:34.787944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.807 [2024-06-10 10:54:34.787953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.807 qpair failed and we were unable to recover it. 00:29:10.807 [2024-06-10 10:54:34.788248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.807 [2024-06-10 10:54:34.788258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.807 qpair failed and we were unable to recover it. 
00:29:10.807 [2024-06-10 10:54:34.788596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.807 [2024-06-10 10:54:34.788605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.807 qpair failed and we were unable to recover it. 00:29:10.807 [2024-06-10 10:54:34.788848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.807 [2024-06-10 10:54:34.788857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.807 qpair failed and we were unable to recover it. 00:29:10.807 [2024-06-10 10:54:34.789239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.807 [2024-06-10 10:54:34.789264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.807 qpair failed and we were unable to recover it. 00:29:10.807 [2024-06-10 10:54:34.789605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.807 [2024-06-10 10:54:34.789614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.807 qpair failed and we were unable to recover it. 00:29:10.807 [2024-06-10 10:54:34.789965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.807 [2024-06-10 10:54:34.789974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.807 qpair failed and we were unable to recover it. 00:29:10.807 [2024-06-10 10:54:34.790194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.807 [2024-06-10 10:54:34.790203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.807 qpair failed and we were unable to recover it. 00:29:10.807 [2024-06-10 10:54:34.790586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.807 [2024-06-10 10:54:34.790596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.807 qpair failed and we were unable to recover it. 00:29:10.807 [2024-06-10 10:54:34.790928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.807 [2024-06-10 10:54:34.790937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.807 qpair failed and we were unable to recover it. 00:29:10.807 [2024-06-10 10:54:34.791293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.807 [2024-06-10 10:54:34.791302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.807 qpair failed and we were unable to recover it. 00:29:10.807 [2024-06-10 10:54:34.791670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.807 [2024-06-10 10:54:34.791679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.807 qpair failed and we were unable to recover it. 
00:29:10.807 [2024-06-10 10:54:34.791932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.807 [2024-06-10 10:54:34.791941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.807 qpair failed and we were unable to recover it. 00:29:10.807 [2024-06-10 10:54:34.792303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.807 [2024-06-10 10:54:34.792312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.807 qpair failed and we were unable to recover it. 00:29:10.807 [2024-06-10 10:54:34.792632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.807 [2024-06-10 10:54:34.792642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.807 qpair failed and we were unable to recover it. 00:29:10.807 [2024-06-10 10:54:34.793005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.807 [2024-06-10 10:54:34.793016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.807 qpair failed and we were unable to recover it. 00:29:10.807 [2024-06-10 10:54:34.793350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.807 [2024-06-10 10:54:34.793360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.807 qpair failed and we were unable to recover it. 00:29:10.807 [2024-06-10 10:54:34.793717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.807 [2024-06-10 10:54:34.793726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.807 qpair failed and we were unable to recover it. 00:29:10.807 [2024-06-10 10:54:34.794061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.807 [2024-06-10 10:54:34.794069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.807 qpair failed and we were unable to recover it. 00:29:10.807 [2024-06-10 10:54:34.794532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.807 [2024-06-10 10:54:34.794541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.807 qpair failed and we were unable to recover it. 00:29:10.807 [2024-06-10 10:54:34.794804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.808 [2024-06-10 10:54:34.794813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.808 qpair failed and we were unable to recover it. 00:29:10.808 [2024-06-10 10:54:34.795059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.808 [2024-06-10 10:54:34.795069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.808 qpair failed and we were unable to recover it. 
00:29:10.808 [2024-06-10 10:54:34.795441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.808 [2024-06-10 10:54:34.795450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.808 qpair failed and we were unable to recover it. 00:29:10.808 [2024-06-10 10:54:34.795773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.808 [2024-06-10 10:54:34.795783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.808 qpair failed and we were unable to recover it. 00:29:10.808 [2024-06-10 10:54:34.796140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.808 [2024-06-10 10:54:34.796149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.808 qpair failed and we were unable to recover it. 00:29:10.808 [2024-06-10 10:54:34.796539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.808 [2024-06-10 10:54:34.796548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.808 qpair failed and we were unable to recover it. 00:29:10.808 [2024-06-10 10:54:34.796894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.808 [2024-06-10 10:54:34.796903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.808 qpair failed and we were unable to recover it. 00:29:10.808 [2024-06-10 10:54:34.797261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.808 [2024-06-10 10:54:34.797271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.808 qpair failed and we were unable to recover it. 00:29:10.808 [2024-06-10 10:54:34.797603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.808 [2024-06-10 10:54:34.797612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.808 qpair failed and we were unable to recover it. 00:29:10.808 [2024-06-10 10:54:34.797962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.808 [2024-06-10 10:54:34.797971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.808 qpair failed and we were unable to recover it. 00:29:10.808 [2024-06-10 10:54:34.798312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.808 [2024-06-10 10:54:34.798322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.808 qpair failed and we were unable to recover it. 00:29:10.808 [2024-06-10 10:54:34.798658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.808 [2024-06-10 10:54:34.798667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.808 qpair failed and we were unable to recover it. 
00:29:10.808 [2024-06-10 10:54:34.798922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.808 [2024-06-10 10:54:34.798931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.808 qpair failed and we were unable to recover it. 00:29:10.808 [2024-06-10 10:54:34.799277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.808 [2024-06-10 10:54:34.799286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.808 qpair failed and we were unable to recover it. 00:29:10.808 [2024-06-10 10:54:34.799628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.808 [2024-06-10 10:54:34.799637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.808 qpair failed and we were unable to recover it. 00:29:10.808 [2024-06-10 10:54:34.799841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.808 [2024-06-10 10:54:34.799851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.808 qpair failed and we were unable to recover it. 00:29:10.808 [2024-06-10 10:54:34.800330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.808 [2024-06-10 10:54:34.800339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.808 qpair failed and we were unable to recover it. 00:29:10.808 [2024-06-10 10:54:34.800699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.808 [2024-06-10 10:54:34.800708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.808 qpair failed and we were unable to recover it. 00:29:10.808 [2024-06-10 10:54:34.801046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.808 [2024-06-10 10:54:34.801055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.808 qpair failed and we were unable to recover it. 00:29:10.808 [2024-06-10 10:54:34.801218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.808 [2024-06-10 10:54:34.801226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.808 qpair failed and we were unable to recover it. 00:29:10.808 [2024-06-10 10:54:34.801555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.808 [2024-06-10 10:54:34.801564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.808 qpair failed and we were unable to recover it. 00:29:10.808 [2024-06-10 10:54:34.801933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.808 [2024-06-10 10:54:34.801942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.808 qpair failed and we were unable to recover it. 
00:29:10.808 [2024-06-10 10:54:34.802403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.808 [2024-06-10 10:54:34.802417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.808 qpair failed and we were unable to recover it. 00:29:10.808 [2024-06-10 10:54:34.802782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.808 [2024-06-10 10:54:34.802791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.808 qpair failed and we were unable to recover it. 00:29:10.808 [2024-06-10 10:54:34.803075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.808 [2024-06-10 10:54:34.803084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.808 qpair failed and we were unable to recover it. 00:29:10.808 [2024-06-10 10:54:34.803376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.808 [2024-06-10 10:54:34.803385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.808 qpair failed and we were unable to recover it. 00:29:10.808 [2024-06-10 10:54:34.803752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.808 [2024-06-10 10:54:34.803761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.808 qpair failed and we were unable to recover it. 00:29:10.808 [2024-06-10 10:54:34.803965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.809 [2024-06-10 10:54:34.803975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.809 qpair failed and we were unable to recover it. 00:29:10.809 [2024-06-10 10:54:34.804335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.809 [2024-06-10 10:54:34.804344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.809 qpair failed and we were unable to recover it. 00:29:10.809 [2024-06-10 10:54:34.804673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.809 [2024-06-10 10:54:34.804682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.809 qpair failed and we were unable to recover it. 00:29:10.809 [2024-06-10 10:54:34.805040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.809 [2024-06-10 10:54:34.805049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.809 qpair failed and we were unable to recover it. 00:29:10.809 [2024-06-10 10:54:34.805408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.809 [2024-06-10 10:54:34.805418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.809 qpair failed and we were unable to recover it. 
00:29:10.809 [2024-06-10 10:54:34.805734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.809 [2024-06-10 10:54:34.805743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.809 qpair failed and we were unable to recover it. 00:29:10.809 [2024-06-10 10:54:34.806003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.809 [2024-06-10 10:54:34.806011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.809 qpair failed and we were unable to recover it. 00:29:10.809 [2024-06-10 10:54:34.806323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.809 [2024-06-10 10:54:34.806332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.809 qpair failed and we were unable to recover it. 00:29:10.809 [2024-06-10 10:54:34.806582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.809 [2024-06-10 10:54:34.806592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.809 qpair failed and we were unable to recover it. 00:29:10.809 [2024-06-10 10:54:34.806967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.809 [2024-06-10 10:54:34.806977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.809 qpair failed and we were unable to recover it. 00:29:10.809 [2024-06-10 10:54:34.807335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.809 [2024-06-10 10:54:34.807344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.809 qpair failed and we were unable to recover it. 00:29:10.809 [2024-06-10 10:54:34.807714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.809 [2024-06-10 10:54:34.807723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.809 qpair failed and we were unable to recover it. 00:29:10.809 [2024-06-10 10:54:34.808086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.809 [2024-06-10 10:54:34.808095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.809 qpair failed and we were unable to recover it. 00:29:10.809 [2024-06-10 10:54:34.808457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.809 [2024-06-10 10:54:34.808466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.809 qpair failed and we were unable to recover it. 00:29:10.809 [2024-06-10 10:54:34.808734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.809 [2024-06-10 10:54:34.808743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.809 qpair failed and we were unable to recover it. 
00:29:10.809 [2024-06-10 10:54:34.809125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.809 [2024-06-10 10:54:34.809134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.809 qpair failed and we were unable to recover it. 00:29:10.809 [2024-06-10 10:54:34.809478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.809 [2024-06-10 10:54:34.809487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.809 qpair failed and we were unable to recover it. 00:29:10.809 [2024-06-10 10:54:34.809746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.809 [2024-06-10 10:54:34.809754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.809 qpair failed and we were unable to recover it. 00:29:10.809 [2024-06-10 10:54:34.810092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.809 [2024-06-10 10:54:34.810101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.809 qpair failed and we were unable to recover it. 00:29:10.809 [2024-06-10 10:54:34.810464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.809 [2024-06-10 10:54:34.810473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.809 qpair failed and we were unable to recover it. 00:29:10.809 [2024-06-10 10:54:34.810829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.809 [2024-06-10 10:54:34.810838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.809 qpair failed and we were unable to recover it. 00:29:10.809 [2024-06-10 10:54:34.811182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.809 [2024-06-10 10:54:34.811191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.809 qpair failed and we were unable to recover it. 00:29:10.809 [2024-06-10 10:54:34.811557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.809 [2024-06-10 10:54:34.811567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.809 qpair failed and we were unable to recover it. 00:29:10.809 [2024-06-10 10:54:34.812006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.809 [2024-06-10 10:54:34.812015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.809 qpair failed and we were unable to recover it. 00:29:10.809 [2024-06-10 10:54:34.812374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.809 [2024-06-10 10:54:34.812383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.809 qpair failed and we were unable to recover it. 
00:29:10.809 [2024-06-10 10:54:34.812745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.809 [2024-06-10 10:54:34.812754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420
00:29:10.809 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111; sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for each retry from 10:54:34.813 through 10:54:34.887 ...]
00:29:10.817 [2024-06-10 10:54:34.887365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.817 [2024-06-10 10:54:34.887374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420
00:29:10.817 qpair failed and we were unable to recover it.
00:29:10.817 [2024-06-10 10:54:34.887731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.817 [2024-06-10 10:54:34.887740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.817 qpair failed and we were unable to recover it. 00:29:10.817 [2024-06-10 10:54:34.888073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.817 [2024-06-10 10:54:34.888082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.817 qpair failed and we were unable to recover it. 00:29:10.817 [2024-06-10 10:54:34.888434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.817 [2024-06-10 10:54:34.888443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.817 qpair failed and we were unable to recover it. 00:29:10.817 [2024-06-10 10:54:34.888647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.817 [2024-06-10 10:54:34.888658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.817 qpair failed and we were unable to recover it. 00:29:10.817 [2024-06-10 10:54:34.889011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.817 [2024-06-10 10:54:34.889021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.817 qpair failed and we were unable to recover it. 00:29:10.817 [2024-06-10 10:54:34.889261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.817 [2024-06-10 10:54:34.889271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.817 qpair failed and we were unable to recover it. 00:29:10.817 [2024-06-10 10:54:34.889653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.817 [2024-06-10 10:54:34.889663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.817 qpair failed and we were unable to recover it. 00:29:10.817 [2024-06-10 10:54:34.890094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.817 [2024-06-10 10:54:34.890103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.817 qpair failed and we were unable to recover it. 00:29:10.817 [2024-06-10 10:54:34.890462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.817 [2024-06-10 10:54:34.890471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.817 qpair failed and we were unable to recover it. 00:29:10.817 [2024-06-10 10:54:34.890820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.817 [2024-06-10 10:54:34.890829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.817 qpair failed and we were unable to recover it. 
00:29:10.817 [2024-06-10 10:54:34.891191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.817 [2024-06-10 10:54:34.891200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.817 qpair failed and we were unable to recover it. 00:29:10.817 [2024-06-10 10:54:34.891555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.817 [2024-06-10 10:54:34.891564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.817 qpair failed and we were unable to recover it. 00:29:10.817 [2024-06-10 10:54:34.891912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.817 [2024-06-10 10:54:34.891921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.817 qpair failed and we were unable to recover it. 00:29:10.817 [2024-06-10 10:54:34.892227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.817 [2024-06-10 10:54:34.892236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.817 qpair failed and we were unable to recover it. 00:29:10.817 [2024-06-10 10:54:34.892600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.817 [2024-06-10 10:54:34.892610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.817 qpair failed and we were unable to recover it. 00:29:10.817 [2024-06-10 10:54:34.892967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.817 [2024-06-10 10:54:34.892975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.817 qpair failed and we were unable to recover it. 00:29:10.817 [2024-06-10 10:54:34.893312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.817 [2024-06-10 10:54:34.893322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.817 qpair failed and we were unable to recover it. 00:29:10.817 [2024-06-10 10:54:34.893695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.817 [2024-06-10 10:54:34.893705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.817 qpair failed and we were unable to recover it. 00:29:10.818 [2024-06-10 10:54:34.894061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.818 [2024-06-10 10:54:34.894070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.818 qpair failed and we were unable to recover it. 00:29:10.818 [2024-06-10 10:54:34.894441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.818 [2024-06-10 10:54:34.894450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.818 qpair failed and we were unable to recover it. 
00:29:10.818 [2024-06-10 10:54:34.894842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.818 [2024-06-10 10:54:34.894851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.818 qpair failed and we were unable to recover it. 00:29:10.818 [2024-06-10 10:54:34.895224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.818 [2024-06-10 10:54:34.895234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.818 qpair failed and we were unable to recover it. 00:29:10.818 [2024-06-10 10:54:34.895597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.818 [2024-06-10 10:54:34.895607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.818 qpair failed and we were unable to recover it. 00:29:10.818 [2024-06-10 10:54:34.895978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.818 [2024-06-10 10:54:34.895987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.818 qpair failed and we were unable to recover it. 00:29:10.818 [2024-06-10 10:54:34.896321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.818 [2024-06-10 10:54:34.896330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.818 qpair failed and we were unable to recover it. 00:29:10.818 [2024-06-10 10:54:34.896685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.818 [2024-06-10 10:54:34.896694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.818 qpair failed and we were unable to recover it. 00:29:10.818 [2024-06-10 10:54:34.897029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.818 [2024-06-10 10:54:34.897039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.818 qpair failed and we were unable to recover it. 00:29:10.818 [2024-06-10 10:54:34.897347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.818 [2024-06-10 10:54:34.897356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.818 qpair failed and we were unable to recover it. 00:29:10.818 [2024-06-10 10:54:34.897731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.818 [2024-06-10 10:54:34.897740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.818 qpair failed and we were unable to recover it. 00:29:10.818 [2024-06-10 10:54:34.898096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.818 [2024-06-10 10:54:34.898105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.818 qpair failed and we were unable to recover it. 
00:29:10.818 [2024-06-10 10:54:34.898439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.818 [2024-06-10 10:54:34.898448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.818 qpair failed and we were unable to recover it. 00:29:10.818 [2024-06-10 10:54:34.898819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.818 [2024-06-10 10:54:34.898836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.818 qpair failed and we were unable to recover it. 00:29:10.818 [2024-06-10 10:54:34.899208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.818 [2024-06-10 10:54:34.899217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.818 qpair failed and we were unable to recover it. 00:29:10.818 [2024-06-10 10:54:34.899550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.818 [2024-06-10 10:54:34.899559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.818 qpair failed and we were unable to recover it. 00:29:10.818 [2024-06-10 10:54:34.899760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.818 [2024-06-10 10:54:34.899770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.818 qpair failed and we were unable to recover it. 00:29:10.818 [2024-06-10 10:54:34.900085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.818 [2024-06-10 10:54:34.900094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.818 qpair failed and we were unable to recover it. 00:29:10.818 [2024-06-10 10:54:34.900465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.818 [2024-06-10 10:54:34.900475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.818 qpair failed and we were unable to recover it. 00:29:10.818 [2024-06-10 10:54:34.900829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.818 [2024-06-10 10:54:34.900844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.818 qpair failed and we were unable to recover it. 00:29:10.818 [2024-06-10 10:54:34.901197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.818 [2024-06-10 10:54:34.901206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.818 qpair failed and we were unable to recover it. 00:29:10.818 [2024-06-10 10:54:34.901497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.818 [2024-06-10 10:54:34.901507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.818 qpair failed and we were unable to recover it. 
00:29:10.818 [2024-06-10 10:54:34.901871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.818 [2024-06-10 10:54:34.901880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.818 qpair failed and we were unable to recover it. 00:29:10.818 [2024-06-10 10:54:34.902213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.818 [2024-06-10 10:54:34.902221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.818 qpair failed and we were unable to recover it. 00:29:10.818 [2024-06-10 10:54:34.902576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.818 [2024-06-10 10:54:34.902585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.818 qpair failed and we were unable to recover it. 00:29:10.818 [2024-06-10 10:54:34.902942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.818 [2024-06-10 10:54:34.902951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.818 qpair failed and we were unable to recover it. 00:29:10.818 [2024-06-10 10:54:34.903303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.818 [2024-06-10 10:54:34.903312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.818 qpair failed and we were unable to recover it. 00:29:10.818 [2024-06-10 10:54:34.903675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.818 [2024-06-10 10:54:34.903687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.818 qpair failed and we were unable to recover it. 00:29:10.818 [2024-06-10 10:54:34.903948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.818 [2024-06-10 10:54:34.903958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.818 qpair failed and we were unable to recover it. 00:29:10.818 [2024-06-10 10:54:34.904293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.818 [2024-06-10 10:54:34.904302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.818 qpair failed and we were unable to recover it. 00:29:10.818 [2024-06-10 10:54:34.904666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.818 [2024-06-10 10:54:34.904675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.818 qpair failed and we were unable to recover it. 00:29:10.818 [2024-06-10 10:54:34.905085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.818 [2024-06-10 10:54:34.905094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.818 qpair failed and we were unable to recover it. 
00:29:10.818 [2024-06-10 10:54:34.905455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.819 [2024-06-10 10:54:34.905465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.819 qpair failed and we were unable to recover it. 00:29:10.819 [2024-06-10 10:54:34.905700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.819 [2024-06-10 10:54:34.905709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.819 qpair failed and we were unable to recover it. 00:29:10.819 [2024-06-10 10:54:34.906054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.819 [2024-06-10 10:54:34.906063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.819 qpair failed and we were unable to recover it. 00:29:10.819 [2024-06-10 10:54:34.906332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.819 [2024-06-10 10:54:34.906341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.819 qpair failed and we were unable to recover it. 00:29:10.819 [2024-06-10 10:54:34.906599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.819 [2024-06-10 10:54:34.906608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.819 qpair failed and we were unable to recover it. 00:29:10.819 [2024-06-10 10:54:34.906966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.819 [2024-06-10 10:54:34.906976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.819 qpair failed and we were unable to recover it. 00:29:10.819 [2024-06-10 10:54:34.907355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.819 [2024-06-10 10:54:34.907364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.819 qpair failed and we were unable to recover it. 00:29:10.819 [2024-06-10 10:54:34.907711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.819 [2024-06-10 10:54:34.907719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.819 qpair failed and we were unable to recover it. 00:29:10.819 [2024-06-10 10:54:34.908074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.819 [2024-06-10 10:54:34.908083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.819 qpair failed and we were unable to recover it. 00:29:10.819 [2024-06-10 10:54:34.908340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.819 [2024-06-10 10:54:34.908350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.819 qpair failed and we were unable to recover it. 
00:29:10.819 [2024-06-10 10:54:34.908696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.819 [2024-06-10 10:54:34.908706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.819 qpair failed and we were unable to recover it. 00:29:10.819 [2024-06-10 10:54:34.909010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.819 [2024-06-10 10:54:34.909019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.819 qpair failed and we were unable to recover it. 00:29:10.819 [2024-06-10 10:54:34.909353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.819 [2024-06-10 10:54:34.909363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.819 qpair failed and we were unable to recover it. 00:29:10.819 [2024-06-10 10:54:34.909716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.819 [2024-06-10 10:54:34.909725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.819 qpair failed and we were unable to recover it. 00:29:10.819 [2024-06-10 10:54:34.910060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.819 [2024-06-10 10:54:34.910069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.819 qpair failed and we were unable to recover it. 00:29:10.819 [2024-06-10 10:54:34.910403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.819 [2024-06-10 10:54:34.910412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.819 qpair failed and we were unable to recover it. 00:29:10.819 [2024-06-10 10:54:34.910644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.819 [2024-06-10 10:54:34.910654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.819 qpair failed and we were unable to recover it. 00:29:10.819 [2024-06-10 10:54:34.911110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.819 [2024-06-10 10:54:34.911119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.819 qpair failed and we were unable to recover it. 00:29:10.819 [2024-06-10 10:54:34.911464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.819 [2024-06-10 10:54:34.911474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.819 qpair failed and we were unable to recover it. 00:29:10.819 [2024-06-10 10:54:34.911832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.819 [2024-06-10 10:54:34.911841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.819 qpair failed and we were unable to recover it. 
00:29:10.819 [2024-06-10 10:54:34.912175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.819 [2024-06-10 10:54:34.912183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.819 qpair failed and we were unable to recover it. 00:29:10.819 [2024-06-10 10:54:34.912538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.819 [2024-06-10 10:54:34.912548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.819 qpair failed and we were unable to recover it. 00:29:10.819 [2024-06-10 10:54:34.912920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.819 [2024-06-10 10:54:34.912931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.819 qpair failed and we were unable to recover it. 00:29:10.819 [2024-06-10 10:54:34.913265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.819 [2024-06-10 10:54:34.913274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.819 qpair failed and we were unable to recover it. 00:29:10.819 [2024-06-10 10:54:34.913625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.819 [2024-06-10 10:54:34.913643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.819 qpair failed and we were unable to recover it. 00:29:10.819 [2024-06-10 10:54:34.913992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.819 [2024-06-10 10:54:34.914000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.819 qpair failed and we were unable to recover it. 00:29:10.819 [2024-06-10 10:54:34.914337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.819 [2024-06-10 10:54:34.914348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.819 qpair failed and we were unable to recover it. 00:29:10.819 [2024-06-10 10:54:34.914744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.819 [2024-06-10 10:54:34.914753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.819 qpair failed and we were unable to recover it. 00:29:10.819 [2024-06-10 10:54:34.914940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.819 [2024-06-10 10:54:34.914950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.819 qpair failed and we were unable to recover it. 00:29:10.819 [2024-06-10 10:54:34.915152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.819 [2024-06-10 10:54:34.915162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.819 qpair failed and we were unable to recover it. 
00:29:10.819 [2024-06-10 10:54:34.915508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.819 [2024-06-10 10:54:34.915517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.819 qpair failed and we were unable to recover it. 00:29:10.819 [2024-06-10 10:54:34.915923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.819 [2024-06-10 10:54:34.915932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.819 qpair failed and we were unable to recover it. 00:29:10.819 [2024-06-10 10:54:34.916180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.819 [2024-06-10 10:54:34.916189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.819 qpair failed and we were unable to recover it. 00:29:10.819 [2024-06-10 10:54:34.916549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.819 [2024-06-10 10:54:34.916558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.819 qpair failed and we were unable to recover it. 00:29:10.820 [2024-06-10 10:54:34.916898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.820 [2024-06-10 10:54:34.916907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.820 qpair failed and we were unable to recover it. 00:29:10.820 [2024-06-10 10:54:34.917314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.820 [2024-06-10 10:54:34.917324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.820 qpair failed and we were unable to recover it. 00:29:10.820 [2024-06-10 10:54:34.917662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.820 [2024-06-10 10:54:34.917671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.820 qpair failed and we were unable to recover it. 00:29:10.820 [2024-06-10 10:54:34.918005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.820 [2024-06-10 10:54:34.918014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.820 qpair failed and we were unable to recover it. 00:29:10.820 [2024-06-10 10:54:34.918392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.820 [2024-06-10 10:54:34.918401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.820 qpair failed and we were unable to recover it. 00:29:10.820 [2024-06-10 10:54:34.918763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.820 [2024-06-10 10:54:34.918773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.820 qpair failed and we were unable to recover it. 
00:29:10.820 [2024-06-10 10:54:34.919129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.820 [2024-06-10 10:54:34.919138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.820 qpair failed and we were unable to recover it. 00:29:10.820 [2024-06-10 10:54:34.919501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.820 [2024-06-10 10:54:34.919510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.820 qpair failed and we were unable to recover it. 00:29:10.820 [2024-06-10 10:54:34.919898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.820 [2024-06-10 10:54:34.919908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.820 qpair failed and we were unable to recover it. 00:29:10.820 [2024-06-10 10:54:34.920266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.820 [2024-06-10 10:54:34.920275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.820 qpair failed and we were unable to recover it. 00:29:10.820 [2024-06-10 10:54:34.920538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.820 [2024-06-10 10:54:34.920547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.820 qpair failed and we were unable to recover it. 00:29:10.820 [2024-06-10 10:54:34.920906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.820 [2024-06-10 10:54:34.920915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.820 qpair failed and we were unable to recover it. 00:29:10.820 [2024-06-10 10:54:34.921256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.820 [2024-06-10 10:54:34.921266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.820 qpair failed and we were unable to recover it. 00:29:10.820 [2024-06-10 10:54:34.921643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.820 [2024-06-10 10:54:34.921652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.820 qpair failed and we were unable to recover it. 00:29:10.820 [2024-06-10 10:54:34.922007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.820 [2024-06-10 10:54:34.922016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.820 qpair failed and we were unable to recover it. 00:29:10.820 [2024-06-10 10:54:34.922348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.820 [2024-06-10 10:54:34.922359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.820 qpair failed and we were unable to recover it. 
00:29:10.820 [2024-06-10 10:54:34.922606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.820 [2024-06-10 10:54:34.922615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.820 qpair failed and we were unable to recover it. 00:29:10.820 [2024-06-10 10:54:34.922981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.820 [2024-06-10 10:54:34.922990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.820 qpair failed and we were unable to recover it. 00:29:10.820 [2024-06-10 10:54:34.923325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.820 [2024-06-10 10:54:34.923334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.820 qpair failed and we were unable to recover it. 00:29:10.820 [2024-06-10 10:54:34.923698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.820 [2024-06-10 10:54:34.923707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.820 qpair failed and we were unable to recover it. 00:29:10.820 [2024-06-10 10:54:34.924088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.820 [2024-06-10 10:54:34.924097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.820 qpair failed and we were unable to recover it. 00:29:10.820 [2024-06-10 10:54:34.924426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.820 [2024-06-10 10:54:34.924436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.820 qpair failed and we were unable to recover it. 00:29:10.820 [2024-06-10 10:54:34.924789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.820 [2024-06-10 10:54:34.924798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.820 qpair failed and we were unable to recover it. 00:29:10.820 [2024-06-10 10:54:34.925135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.820 [2024-06-10 10:54:34.925145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.820 qpair failed and we were unable to recover it. 00:29:10.820 [2024-06-10 10:54:34.925536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.820 [2024-06-10 10:54:34.925545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.820 qpair failed and we were unable to recover it. 00:29:10.820 [2024-06-10 10:54:34.925918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.820 [2024-06-10 10:54:34.925927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.820 qpair failed and we were unable to recover it. 
00:29:10.820 [2024-06-10 10:54:34.926234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.820 [2024-06-10 10:54:34.926247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.820 qpair failed and we were unable to recover it. 00:29:10.820 [2024-06-10 10:54:34.926642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.820 [2024-06-10 10:54:34.926651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.820 qpair failed and we were unable to recover it. 00:29:10.820 [2024-06-10 10:54:34.926909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.820 [2024-06-10 10:54:34.926918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.820 qpair failed and we were unable to recover it. 00:29:10.820 [2024-06-10 10:54:34.927298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.820 [2024-06-10 10:54:34.927308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.820 qpair failed and we were unable to recover it. 00:29:10.820 [2024-06-10 10:54:34.927665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.820 [2024-06-10 10:54:34.927673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.820 qpair failed and we were unable to recover it. 00:29:10.820 [2024-06-10 10:54:34.927920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.820 [2024-06-10 10:54:34.927929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.820 qpair failed and we were unable to recover it. 00:29:10.821 [2024-06-10 10:54:34.928343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.821 [2024-06-10 10:54:34.928352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.821 qpair failed and we were unable to recover it. 00:29:10.821 [2024-06-10 10:54:34.928689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.821 [2024-06-10 10:54:34.928698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.821 qpair failed and we were unable to recover it. 00:29:10.821 [2024-06-10 10:54:34.929045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.821 [2024-06-10 10:54:34.929054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.821 qpair failed and we were unable to recover it. 00:29:10.821 [2024-06-10 10:54:34.929388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.821 [2024-06-10 10:54:34.929398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.821 qpair failed and we were unable to recover it. 
00:29:10.821 [2024-06-10 10:54:34.929732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.821 [2024-06-10 10:54:34.929741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.821 qpair failed and we were unable to recover it. 00:29:10.821 [2024-06-10 10:54:34.930090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.821 [2024-06-10 10:54:34.930105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.821 qpair failed and we were unable to recover it. 00:29:10.821 [2024-06-10 10:54:34.930444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.821 [2024-06-10 10:54:34.930453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.821 qpair failed and we were unable to recover it. 00:29:10.821 [2024-06-10 10:54:34.930831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.821 [2024-06-10 10:54:34.930840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.821 qpair failed and we were unable to recover it. 00:29:10.821 [2024-06-10 10:54:34.931224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.821 [2024-06-10 10:54:34.931234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.821 qpair failed and we were unable to recover it. 00:29:10.821 [2024-06-10 10:54:34.931619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.821 [2024-06-10 10:54:34.931628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.821 qpair failed and we were unable to recover it. 00:29:10.821 [2024-06-10 10:54:34.931999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.821 [2024-06-10 10:54:34.932008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.821 qpair failed and we were unable to recover it. 00:29:10.821 [2024-06-10 10:54:34.932240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.821 [2024-06-10 10:54:34.932254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.821 qpair failed and we were unable to recover it. 00:29:10.821 [2024-06-10 10:54:34.932584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.821 [2024-06-10 10:54:34.932593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.821 qpair failed and we were unable to recover it. 00:29:10.821 [2024-06-10 10:54:34.932897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.821 [2024-06-10 10:54:34.932906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.821 qpair failed and we were unable to recover it. 
00:29:10.821 [2024-06-10 10:54:34.933275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.821 [2024-06-10 10:54:34.933285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.821 qpair failed and we were unable to recover it. 00:29:10.821 [2024-06-10 10:54:34.933644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.821 [2024-06-10 10:54:34.933653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.821 qpair failed and we were unable to recover it. 00:29:10.821 [2024-06-10 10:54:34.934002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.821 [2024-06-10 10:54:34.934011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.821 qpair failed and we were unable to recover it. 00:29:10.821 [2024-06-10 10:54:34.934369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.821 [2024-06-10 10:54:34.934378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.821 qpair failed and we were unable to recover it. 00:29:10.821 [2024-06-10 10:54:34.934728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.821 [2024-06-10 10:54:34.934737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.821 qpair failed and we were unable to recover it. 00:29:10.821 [2024-06-10 10:54:34.935101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.821 [2024-06-10 10:54:34.935117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.821 qpair failed and we were unable to recover it. 00:29:10.821 [2024-06-10 10:54:34.935489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.821 [2024-06-10 10:54:34.935498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.821 qpair failed and we were unable to recover it. 00:29:10.821 [2024-06-10 10:54:34.935847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.821 [2024-06-10 10:54:34.935856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.821 qpair failed and we were unable to recover it. 00:29:10.821 [2024-06-10 10:54:34.936210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.821 [2024-06-10 10:54:34.936226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.821 qpair failed and we were unable to recover it. 00:29:10.821 [2024-06-10 10:54:34.936587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.821 [2024-06-10 10:54:34.936596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.821 qpair failed and we were unable to recover it. 
00:29:10.821 [2024-06-10 10:54:34.936935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.821 [2024-06-10 10:54:34.936944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.821 qpair failed and we were unable to recover it. 00:29:10.821 [2024-06-10 10:54:34.937313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.821 [2024-06-10 10:54:34.937322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.821 qpair failed and we were unable to recover it. 00:29:10.821 [2024-06-10 10:54:34.937678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.821 [2024-06-10 10:54:34.937687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.821 qpair failed and we were unable to recover it. 00:29:10.821 [2024-06-10 10:54:34.938020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.821 [2024-06-10 10:54:34.938028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.821 qpair failed and we were unable to recover it. 00:29:10.821 [2024-06-10 10:54:34.938412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.821 [2024-06-10 10:54:34.938421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.821 qpair failed and we were unable to recover it. 00:29:10.821 [2024-06-10 10:54:34.938775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.821 [2024-06-10 10:54:34.938784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.821 qpair failed and we were unable to recover it. 00:29:10.821 [2024-06-10 10:54:34.939140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.821 [2024-06-10 10:54:34.939149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.821 qpair failed and we were unable to recover it. 00:29:10.821 [2024-06-10 10:54:34.939523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.821 [2024-06-10 10:54:34.939532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.821 qpair failed and we were unable to recover it. 00:29:10.821 [2024-06-10 10:54:34.939889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.821 [2024-06-10 10:54:34.939898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.821 qpair failed and we were unable to recover it. 00:29:10.822 [2024-06-10 10:54:34.940222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.822 [2024-06-10 10:54:34.940231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.822 qpair failed and we were unable to recover it. 
00:29:10.822 [2024-06-10 10:54:34.940496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.822 [2024-06-10 10:54:34.940505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.822 qpair failed and we were unable to recover it. 00:29:10.822 [2024-06-10 10:54:34.940846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.822 [2024-06-10 10:54:34.940856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.822 qpair failed and we were unable to recover it. 00:29:10.822 [2024-06-10 10:54:34.941236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.822 [2024-06-10 10:54:34.941250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.822 qpair failed and we were unable to recover it. 00:29:10.822 [2024-06-10 10:54:34.941599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.822 [2024-06-10 10:54:34.941607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.822 qpair failed and we were unable to recover it. 00:29:10.822 [2024-06-10 10:54:34.941979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.822 [2024-06-10 10:54:34.941988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.822 qpair failed and we were unable to recover it. 00:29:10.822 [2024-06-10 10:54:34.942312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.822 [2024-06-10 10:54:34.942323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.822 qpair failed and we were unable to recover it. 00:29:10.822 [2024-06-10 10:54:34.942693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.822 [2024-06-10 10:54:34.942702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.822 qpair failed and we were unable to recover it. 00:29:10.822 [2024-06-10 10:54:34.943085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.822 [2024-06-10 10:54:34.943094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.822 qpair failed and we were unable to recover it. 00:29:10.822 [2024-06-10 10:54:34.943462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.822 [2024-06-10 10:54:34.943472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.822 qpair failed and we were unable to recover it. 00:29:10.822 [2024-06-10 10:54:34.943752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.822 [2024-06-10 10:54:34.943760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.822 qpair failed and we were unable to recover it. 
00:29:10.822 [2024-06-10 10:54:34.944098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.822 [2024-06-10 10:54:34.944106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.822 qpair failed and we were unable to recover it. 00:29:10.822 [2024-06-10 10:54:34.944432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.822 [2024-06-10 10:54:34.944442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.822 qpair failed and we were unable to recover it. 00:29:10.822 [2024-06-10 10:54:34.944836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.822 [2024-06-10 10:54:34.944845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.822 qpair failed and we were unable to recover it. 00:29:10.822 [2024-06-10 10:54:34.945170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.822 [2024-06-10 10:54:34.945179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.822 qpair failed and we were unable to recover it. 00:29:10.822 [2024-06-10 10:54:34.945428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.822 [2024-06-10 10:54:34.945436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.822 qpair failed and we were unable to recover it. 00:29:10.822 [2024-06-10 10:54:34.945794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.822 [2024-06-10 10:54:34.945803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.822 qpair failed and we were unable to recover it. 00:29:10.822 [2024-06-10 10:54:34.946181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.822 [2024-06-10 10:54:34.946190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.822 qpair failed and we were unable to recover it. 00:29:10.822 [2024-06-10 10:54:34.946598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.822 [2024-06-10 10:54:34.946610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.822 qpair failed and we were unable to recover it. 00:29:10.822 [2024-06-10 10:54:34.946951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.822 [2024-06-10 10:54:34.946961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.822 qpair failed and we were unable to recover it. 00:29:10.822 [2024-06-10 10:54:34.947318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.822 [2024-06-10 10:54:34.947328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.822 qpair failed and we were unable to recover it. 
00:29:10.822 [2024-06-10 10:54:34.947690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.822 [2024-06-10 10:54:34.947707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.822 qpair failed and we were unable to recover it. 00:29:10.822 [2024-06-10 10:54:34.948061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.822 [2024-06-10 10:54:34.948070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.822 qpair failed and we were unable to recover it. 00:29:10.822 [2024-06-10 10:54:34.948405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.822 [2024-06-10 10:54:34.948414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.822 qpair failed and we were unable to recover it. 00:29:10.822 [2024-06-10 10:54:34.948795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.822 [2024-06-10 10:54:34.948804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.822 qpair failed and we were unable to recover it. 00:29:10.822 [2024-06-10 10:54:34.949161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.822 [2024-06-10 10:54:34.949170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.822 qpair failed and we were unable to recover it. 00:29:10.822 [2024-06-10 10:54:34.949513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.822 [2024-06-10 10:54:34.949522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.822 qpair failed and we were unable to recover it. 00:29:10.822 [2024-06-10 10:54:34.949867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.822 [2024-06-10 10:54:34.949876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.822 qpair failed and we were unable to recover it. 00:29:10.822 [2024-06-10 10:54:34.950232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.822 [2024-06-10 10:54:34.950241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.822 qpair failed and we were unable to recover it. 00:29:10.822 [2024-06-10 10:54:34.950571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.822 [2024-06-10 10:54:34.950580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.822 qpair failed and we were unable to recover it. 00:29:10.822 [2024-06-10 10:54:34.950935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.822 [2024-06-10 10:54:34.950952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.822 qpair failed and we were unable to recover it. 
00:29:10.822 [2024-06-10 10:54:34.951076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.822 [2024-06-10 10:54:34.951087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.822 qpair failed and we were unable to recover it. 00:29:10.822 [2024-06-10 10:54:34.951460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.822 [2024-06-10 10:54:34.951470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.822 qpair failed and we were unable to recover it. 00:29:10.822 [2024-06-10 10:54:34.951803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.822 [2024-06-10 10:54:34.951812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.822 qpair failed and we were unable to recover it. 00:29:10.822 [2024-06-10 10:54:34.952168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.822 [2024-06-10 10:54:34.952178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.822 qpair failed and we were unable to recover it. 00:29:10.822 [2024-06-10 10:54:34.952531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.822 [2024-06-10 10:54:34.952540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.822 qpair failed and we were unable to recover it. 00:29:10.822 [2024-06-10 10:54:34.952873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.822 [2024-06-10 10:54:34.952882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.822 qpair failed and we were unable to recover it. 00:29:10.823 [2024-06-10 10:54:34.953241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.823 [2024-06-10 10:54:34.953254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.823 qpair failed and we were unable to recover it. 00:29:10.823 [2024-06-10 10:54:34.953578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.823 [2024-06-10 10:54:34.953587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.823 qpair failed and we were unable to recover it. 00:29:10.823 [2024-06-10 10:54:34.953925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.823 [2024-06-10 10:54:34.953934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.823 qpair failed and we were unable to recover it. 00:29:10.823 [2024-06-10 10:54:34.954382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.823 [2024-06-10 10:54:34.954391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.823 qpair failed and we were unable to recover it. 
00:29:10.823 [2024-06-10 10:54:34.954723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.823 [2024-06-10 10:54:34.954732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.823 qpair failed and we were unable to recover it. 00:29:10.823 [2024-06-10 10:54:34.954937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.823 [2024-06-10 10:54:34.954948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.823 qpair failed and we were unable to recover it. 00:29:10.823 [2024-06-10 10:54:34.955307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.823 [2024-06-10 10:54:34.955317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.823 qpair failed and we were unable to recover it. 00:29:10.823 [2024-06-10 10:54:34.955583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.823 [2024-06-10 10:54:34.955592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.823 qpair failed and we were unable to recover it. 00:29:10.823 [2024-06-10 10:54:34.955926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.823 [2024-06-10 10:54:34.955938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.823 qpair failed and we were unable to recover it. 00:29:10.823 [2024-06-10 10:54:34.956296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.823 [2024-06-10 10:54:34.956305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.823 qpair failed and we were unable to recover it. 00:29:10.823 [2024-06-10 10:54:34.956651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.823 [2024-06-10 10:54:34.956660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.823 qpair failed and we were unable to recover it. 00:29:10.823 [2024-06-10 10:54:34.956921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.823 [2024-06-10 10:54:34.956931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.823 qpair failed and we were unable to recover it. 00:29:10.823 [2024-06-10 10:54:34.957287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.823 [2024-06-10 10:54:34.957296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.823 qpair failed and we were unable to recover it. 00:29:10.823 [2024-06-10 10:54:34.957653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.823 [2024-06-10 10:54:34.957662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.823 qpair failed and we were unable to recover it. 
00:29:10.823 [2024-06-10 10:54:34.957892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.823 [2024-06-10 10:54:34.957901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.823 qpair failed and we were unable to recover it. 00:29:10.823 [2024-06-10 10:54:34.958236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.823 [2024-06-10 10:54:34.958248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.823 qpair failed and we were unable to recover it. 00:29:10.823 [2024-06-10 10:54:34.958590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.823 [2024-06-10 10:54:34.958598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.823 qpair failed and we were unable to recover it. 00:29:10.823 [2024-06-10 10:54:34.958983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.823 [2024-06-10 10:54:34.958992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.823 qpair failed and we were unable to recover it. 00:29:10.823 [2024-06-10 10:54:34.959313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.823 [2024-06-10 10:54:34.959322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.823 qpair failed and we were unable to recover it. 00:29:10.823 [2024-06-10 10:54:34.959679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.823 [2024-06-10 10:54:34.959688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.823 qpair failed and we were unable to recover it. 00:29:10.823 [2024-06-10 10:54:34.960019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.823 [2024-06-10 10:54:34.960028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.823 qpair failed and we were unable to recover it. 00:29:10.823 [2024-06-10 10:54:34.960382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.823 [2024-06-10 10:54:34.960392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.823 qpair failed and we were unable to recover it. 00:29:10.823 [2024-06-10 10:54:34.960807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.823 [2024-06-10 10:54:34.960816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.823 qpair failed and we were unable to recover it. 00:29:10.823 [2024-06-10 10:54:34.961171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.823 [2024-06-10 10:54:34.961180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.823 qpair failed and we were unable to recover it. 
00:29:10.823 [2024-06-10 10:54:34.961548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.823 [2024-06-10 10:54:34.961557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.823 qpair failed and we were unable to recover it. 00:29:10.823 [2024-06-10 10:54:34.961933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.823 [2024-06-10 10:54:34.961942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.823 qpair failed and we were unable to recover it. 00:29:10.823 [2024-06-10 10:54:34.962322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.823 [2024-06-10 10:54:34.962331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.823 qpair failed and we were unable to recover it. 00:29:10.823 [2024-06-10 10:54:34.962694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.823 [2024-06-10 10:54:34.962711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.823 qpair failed and we were unable to recover it. 00:29:10.823 [2024-06-10 10:54:34.963059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.823 [2024-06-10 10:54:34.963068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.823 qpair failed and we were unable to recover it. 00:29:10.823 [2024-06-10 10:54:34.963359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.823 [2024-06-10 10:54:34.963368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.823 qpair failed and we were unable to recover it. 00:29:10.823 [2024-06-10 10:54:34.963728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.823 [2024-06-10 10:54:34.963737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.823 qpair failed and we were unable to recover it. 00:29:10.823 [2024-06-10 10:54:34.964069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.823 [2024-06-10 10:54:34.964078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.823 qpair failed and we were unable to recover it. 00:29:10.823 [2024-06-10 10:54:34.964334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.823 [2024-06-10 10:54:34.964343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.823 qpair failed and we were unable to recover it. 00:29:10.823 [2024-06-10 10:54:34.964700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.823 [2024-06-10 10:54:34.964710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.823 qpair failed and we were unable to recover it. 
00:29:10.823 [2024-06-10 10:54:34.965104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.824 [2024-06-10 10:54:34.965113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.824 qpair failed and we were unable to recover it. 00:29:10.824 [2024-06-10 10:54:34.965486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.824 [2024-06-10 10:54:34.965495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.824 qpair failed and we were unable to recover it. 00:29:10.824 [2024-06-10 10:54:34.965856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.824 [2024-06-10 10:54:34.965866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.824 qpair failed and we were unable to recover it. 00:29:10.824 [2024-06-10 10:54:34.966226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.824 [2024-06-10 10:54:34.966235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.824 qpair failed and we were unable to recover it. 00:29:10.824 [2024-06-10 10:54:34.966586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.824 [2024-06-10 10:54:34.966595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.824 qpair failed and we were unable to recover it. 00:29:10.824 [2024-06-10 10:54:34.966945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.824 [2024-06-10 10:54:34.966962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.824 qpair failed and we were unable to recover it. 00:29:10.824 [2024-06-10 10:54:34.967316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.824 [2024-06-10 10:54:34.967335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.824 qpair failed and we were unable to recover it. 00:29:10.824 [2024-06-10 10:54:34.967672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.824 [2024-06-10 10:54:34.967681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.824 qpair failed and we were unable to recover it. 00:29:10.824 [2024-06-10 10:54:34.968036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.824 [2024-06-10 10:54:34.968046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.824 qpair failed and we were unable to recover it. 00:29:10.824 [2024-06-10 10:54:34.968442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.824 [2024-06-10 10:54:34.968452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.824 qpair failed and we were unable to recover it. 
00:29:10.824 [2024-06-10 10:54:34.968785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.824 [2024-06-10 10:54:34.968793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.824 qpair failed and we were unable to recover it. 00:29:10.824 [2024-06-10 10:54:34.969152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.824 [2024-06-10 10:54:34.969161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.824 qpair failed and we were unable to recover it. 00:29:10.824 [2024-06-10 10:54:34.969508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.824 [2024-06-10 10:54:34.969517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.824 qpair failed and we were unable to recover it. 00:29:10.824 [2024-06-10 10:54:34.969860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.824 [2024-06-10 10:54:34.969869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.824 qpair failed and we were unable to recover it. 00:29:10.824 [2024-06-10 10:54:34.970229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.824 [2024-06-10 10:54:34.970239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.824 qpair failed and we were unable to recover it. 00:29:10.824 [2024-06-10 10:54:34.970629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.824 [2024-06-10 10:54:34.970638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.824 qpair failed and we were unable to recover it. 00:29:10.824 [2024-06-10 10:54:34.971010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.824 [2024-06-10 10:54:34.971020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.824 qpair failed and we were unable to recover it. 00:29:10.824 [2024-06-10 10:54:34.971404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.824 [2024-06-10 10:54:34.971414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.824 qpair failed and we were unable to recover it. 00:29:10.824 [2024-06-10 10:54:34.971744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.824 [2024-06-10 10:54:34.971754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.824 qpair failed and we were unable to recover it. 00:29:10.824 [2024-06-10 10:54:34.972104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.824 [2024-06-10 10:54:34.972113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.824 qpair failed and we were unable to recover it. 
00:29:10.824 [2024-06-10 10:54:34.972451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.824 [2024-06-10 10:54:34.972461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.824 qpair failed and we were unable to recover it. 00:29:10.824 [2024-06-10 10:54:34.972821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.824 [2024-06-10 10:54:34.972830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.824 qpair failed and we were unable to recover it. 00:29:10.824 [2024-06-10 10:54:34.973170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.824 [2024-06-10 10:54:34.973178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.824 qpair failed and we were unable to recover it. 00:29:10.824 [2024-06-10 10:54:34.973544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.824 [2024-06-10 10:54:34.973554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.824 qpair failed and we were unable to recover it. 00:29:10.824 [2024-06-10 10:54:34.973928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.824 [2024-06-10 10:54:34.973936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.824 qpair failed and we were unable to recover it. 00:29:10.824 [2024-06-10 10:54:34.974269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.824 [2024-06-10 10:54:34.974278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.824 qpair failed and we were unable to recover it. 00:29:10.824 [2024-06-10 10:54:34.974634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.824 [2024-06-10 10:54:34.974643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.824 qpair failed and we were unable to recover it. 00:29:10.824 [2024-06-10 10:54:34.974894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.824 [2024-06-10 10:54:34.974903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.824 qpair failed and we were unable to recover it. 00:29:10.824 [2024-06-10 10:54:34.975110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.824 [2024-06-10 10:54:34.975120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.824 qpair failed and we were unable to recover it. 00:29:10.824 [2024-06-10 10:54:34.975402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.824 [2024-06-10 10:54:34.975413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.824 qpair failed and we were unable to recover it. 
00:29:10.824 [2024-06-10 10:54:34.975762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.824 [2024-06-10 10:54:34.975771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.824 qpair failed and we were unable to recover it. 00:29:10.824 [2024-06-10 10:54:34.976110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.825 [2024-06-10 10:54:34.976119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.825 qpair failed and we were unable to recover it. 00:29:10.825 [2024-06-10 10:54:34.976464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.825 [2024-06-10 10:54:34.976473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.825 qpair failed and we were unable to recover it. 00:29:10.825 [2024-06-10 10:54:34.976831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.825 [2024-06-10 10:54:34.976840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.825 qpair failed and we were unable to recover it. 00:29:10.825 [2024-06-10 10:54:34.977181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.825 [2024-06-10 10:54:34.977190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.825 qpair failed and we were unable to recover it. 00:29:10.825 [2024-06-10 10:54:34.977567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.825 [2024-06-10 10:54:34.977577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.825 qpair failed and we were unable to recover it. 00:29:10.825 [2024-06-10 10:54:34.977954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.825 [2024-06-10 10:54:34.977964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.825 qpair failed and we were unable to recover it. 00:29:10.825 [2024-06-10 10:54:34.978325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.825 [2024-06-10 10:54:34.978334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.825 qpair failed and we were unable to recover it. 00:29:10.825 [2024-06-10 10:54:34.978685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.825 [2024-06-10 10:54:34.978693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.825 qpair failed and we were unable to recover it. 00:29:10.825 [2024-06-10 10:54:34.978991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.825 [2024-06-10 10:54:34.979000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.825 qpair failed and we were unable to recover it. 
00:29:10.825 [2024-06-10 10:54:34.979378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.825 [2024-06-10 10:54:34.979387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.825 qpair failed and we were unable to recover it. 00:29:10.825 [2024-06-10 10:54:34.979743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.825 [2024-06-10 10:54:34.979751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.825 qpair failed and we were unable to recover it. 00:29:10.825 [2024-06-10 10:54:34.980105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.825 [2024-06-10 10:54:34.980117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.825 qpair failed and we were unable to recover it. 00:29:10.825 [2024-06-10 10:54:34.980499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.825 [2024-06-10 10:54:34.980508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.825 qpair failed and we were unable to recover it. 00:29:10.825 [2024-06-10 10:54:34.980849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.825 [2024-06-10 10:54:34.980858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.825 qpair failed and we were unable to recover it. 00:29:10.825 [2024-06-10 10:54:34.981222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.825 [2024-06-10 10:54:34.981238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.825 qpair failed and we were unable to recover it. 00:29:10.825 [2024-06-10 10:54:34.981581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.825 [2024-06-10 10:54:34.981590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.825 qpair failed and we were unable to recover it. 00:29:10.825 [2024-06-10 10:54:34.981929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.825 [2024-06-10 10:54:34.981938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.825 qpair failed and we were unable to recover it. 00:29:10.825 [2024-06-10 10:54:34.982292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.825 [2024-06-10 10:54:34.982301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.825 qpair failed and we were unable to recover it. 00:29:10.825 [2024-06-10 10:54:34.982653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.825 [2024-06-10 10:54:34.982662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.825 qpair failed and we were unable to recover it. 
00:29:10.825 [2024-06-10 10:54:34.982996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.825 [2024-06-10 10:54:34.983004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.825 qpair failed and we were unable to recover it. 00:29:10.825 [2024-06-10 10:54:34.983301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.825 [2024-06-10 10:54:34.983311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.825 qpair failed and we were unable to recover it. 00:29:10.825 [2024-06-10 10:54:34.983682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.825 [2024-06-10 10:54:34.983691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.825 qpair failed and we were unable to recover it. 00:29:10.825 [2024-06-10 10:54:34.984022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.825 [2024-06-10 10:54:34.984031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.825 qpair failed and we were unable to recover it. 00:29:10.825 [2024-06-10 10:54:34.984267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.825 [2024-06-10 10:54:34.984277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.825 qpair failed and we were unable to recover it. 00:29:10.825 [2024-06-10 10:54:34.984632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.825 [2024-06-10 10:54:34.984641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.825 qpair failed and we were unable to recover it. 00:29:10.825 [2024-06-10 10:54:34.984975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.825 [2024-06-10 10:54:34.984985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.825 qpair failed and we were unable to recover it. 00:29:10.825 [2024-06-10 10:54:34.985340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.825 [2024-06-10 10:54:34.985349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.825 qpair failed and we were unable to recover it. 00:29:10.825 [2024-06-10 10:54:34.985676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.825 [2024-06-10 10:54:34.985686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.825 qpair failed and we were unable to recover it. 00:29:10.825 [2024-06-10 10:54:34.986040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.825 [2024-06-10 10:54:34.986048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.825 qpair failed and we were unable to recover it. 
00:29:10.825 [2024-06-10 10:54:34.986426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.825 [2024-06-10 10:54:34.986435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.825 qpair failed and we were unable to recover it. 00:29:10.825 [2024-06-10 10:54:34.986800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.825 [2024-06-10 10:54:34.986810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.825 qpair failed and we were unable to recover it. 00:29:10.825 [2024-06-10 10:54:34.987163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.825 [2024-06-10 10:54:34.987172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.825 qpair failed and we were unable to recover it. 00:29:10.825 [2024-06-10 10:54:34.987588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.825 [2024-06-10 10:54:34.987597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.825 qpair failed and we were unable to recover it. 00:29:10.825 [2024-06-10 10:54:34.987971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.825 [2024-06-10 10:54:34.987980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.825 qpair failed and we were unable to recover it. 00:29:10.825 [2024-06-10 10:54:34.988374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.825 [2024-06-10 10:54:34.988392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.825 qpair failed and we were unable to recover it. 00:29:10.825 [2024-06-10 10:54:34.988579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.825 [2024-06-10 10:54:34.988589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.825 qpair failed and we were unable to recover it. 00:29:10.825 [2024-06-10 10:54:34.988959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.825 [2024-06-10 10:54:34.988968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.826 qpair failed and we were unable to recover it. 00:29:10.826 [2024-06-10 10:54:34.989177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.826 [2024-06-10 10:54:34.989188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.826 qpair failed and we were unable to recover it. 00:29:10.826 [2024-06-10 10:54:34.989390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.826 [2024-06-10 10:54:34.989403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.826 qpair failed and we were unable to recover it. 
00:29:10.826 [2024-06-10 10:54:34.989762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.826 [2024-06-10 10:54:34.989772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.826 qpair failed and we were unable to recover it. 00:29:10.826 [2024-06-10 10:54:34.990121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.826 [2024-06-10 10:54:34.990130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.826 qpair failed and we were unable to recover it. 00:29:10.826 [2024-06-10 10:54:34.990419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.826 [2024-06-10 10:54:34.990429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.826 qpair failed and we were unable to recover it. 00:29:10.826 [2024-06-10 10:54:34.990783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.826 [2024-06-10 10:54:34.990793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.826 qpair failed and we were unable to recover it. 00:29:10.826 [2024-06-10 10:54:34.991149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.826 [2024-06-10 10:54:34.991159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.826 qpair failed and we were unable to recover it. 00:29:10.826 [2024-06-10 10:54:34.991523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.826 [2024-06-10 10:54:34.991533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.826 qpair failed and we were unable to recover it. 00:29:10.826 [2024-06-10 10:54:34.991841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.826 [2024-06-10 10:54:34.991850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.826 qpair failed and we were unable to recover it. 00:29:10.826 [2024-06-10 10:54:34.992222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.826 [2024-06-10 10:54:34.992232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.826 qpair failed and we were unable to recover it. 00:29:10.826 [2024-06-10 10:54:34.992589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.826 [2024-06-10 10:54:34.992600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.826 qpair failed and we were unable to recover it. 00:29:10.826 [2024-06-10 10:54:34.992953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.826 [2024-06-10 10:54:34.992963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.826 qpair failed and we were unable to recover it. 
00:29:10.826 [2024-06-10 10:54:34.993345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.826 [2024-06-10 10:54:34.993355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420
00:29:10.826 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats for every subsequent connection attempt between 10:54:34.993 and 10:54:35.067; only the timestamps differ ...]
00:29:10.832 [2024-06-10 10:54:35.067888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.832 [2024-06-10 10:54:35.067897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420
00:29:10.832 qpair failed and we were unable to recover it.
00:29:10.832 [2024-06-10 10:54:35.068257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.832 [2024-06-10 10:54:35.068268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.832 qpair failed and we were unable to recover it. 00:29:10.832 [2024-06-10 10:54:35.068628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.832 [2024-06-10 10:54:35.068640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.832 qpair failed and we were unable to recover it. 00:29:10.832 [2024-06-10 10:54:35.068920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.832 [2024-06-10 10:54:35.068929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.832 qpair failed and we were unable to recover it. 00:29:10.832 [2024-06-10 10:54:35.069163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.832 [2024-06-10 10:54:35.069172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.832 qpair failed and we were unable to recover it. 00:29:10.832 [2024-06-10 10:54:35.069529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.832 [2024-06-10 10:54:35.069539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.832 qpair failed and we were unable to recover it. 00:29:10.832 [2024-06-10 10:54:35.069899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.832 [2024-06-10 10:54:35.069908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.832 qpair failed and we were unable to recover it. 00:29:10.832 [2024-06-10 10:54:35.070237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.832 [2024-06-10 10:54:35.070250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.832 qpair failed and we were unable to recover it. 00:29:10.832 [2024-06-10 10:54:35.070589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.832 [2024-06-10 10:54:35.070598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.832 qpair failed and we were unable to recover it. 00:29:10.832 [2024-06-10 10:54:35.070936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.832 [2024-06-10 10:54:35.070945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.832 qpair failed and we were unable to recover it. 00:29:10.832 [2024-06-10 10:54:35.071300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.832 [2024-06-10 10:54:35.071309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.832 qpair failed and we were unable to recover it. 
00:29:10.832 [2024-06-10 10:54:35.071645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.832 [2024-06-10 10:54:35.071654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.832 qpair failed and we were unable to recover it. 00:29:10.832 [2024-06-10 10:54:35.072067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.832 [2024-06-10 10:54:35.072077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.832 qpair failed and we were unable to recover it. 00:29:10.832 [2024-06-10 10:54:35.072418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.832 [2024-06-10 10:54:35.072427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.832 qpair failed and we were unable to recover it. 00:29:10.832 [2024-06-10 10:54:35.072840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.832 [2024-06-10 10:54:35.072849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.832 qpair failed and we were unable to recover it. 00:29:10.832 [2024-06-10 10:54:35.073212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.832 [2024-06-10 10:54:35.073221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.832 qpair failed and we were unable to recover it. 00:29:10.832 [2024-06-10 10:54:35.073562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.832 [2024-06-10 10:54:35.073571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.832 qpair failed and we were unable to recover it. 00:29:10.832 [2024-06-10 10:54:35.073913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.832 [2024-06-10 10:54:35.073923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.832 qpair failed and we were unable to recover it. 00:29:10.832 [2024-06-10 10:54:35.074280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.832 [2024-06-10 10:54:35.074289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.832 qpair failed and we were unable to recover it. 00:29:10.832 [2024-06-10 10:54:35.074528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.832 [2024-06-10 10:54:35.074537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.832 qpair failed and we were unable to recover it. 00:29:10.832 [2024-06-10 10:54:35.074766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.832 [2024-06-10 10:54:35.074776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.832 qpair failed and we were unable to recover it. 
00:29:10.832 [2024-06-10 10:54:35.075135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.832 [2024-06-10 10:54:35.075144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.832 qpair failed and we were unable to recover it. 00:29:10.832 [2024-06-10 10:54:35.075432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.832 [2024-06-10 10:54:35.075442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.832 qpair failed and we were unable to recover it. 00:29:10.832 [2024-06-10 10:54:35.075814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.832 [2024-06-10 10:54:35.075823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.832 qpair failed and we were unable to recover it. 00:29:10.832 [2024-06-10 10:54:35.076087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.832 [2024-06-10 10:54:35.076100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.832 qpair failed and we were unable to recover it. 00:29:10.832 [2024-06-10 10:54:35.076337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.832 [2024-06-10 10:54:35.076347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.832 qpair failed and we were unable to recover it. 00:29:10.832 [2024-06-10 10:54:35.076688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.833 [2024-06-10 10:54:35.076698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.833 qpair failed and we were unable to recover it. 00:29:10.833 [2024-06-10 10:54:35.077077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.833 [2024-06-10 10:54:35.077088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.833 qpair failed and we were unable to recover it. 00:29:10.833 [2024-06-10 10:54:35.077440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.833 [2024-06-10 10:54:35.077449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.833 qpair failed and we were unable to recover it. 00:29:10.833 [2024-06-10 10:54:35.077803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.833 [2024-06-10 10:54:35.077812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.833 qpair failed and we were unable to recover it. 00:29:10.833 [2024-06-10 10:54:35.078220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.833 [2024-06-10 10:54:35.078231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.833 qpair failed and we were unable to recover it. 
00:29:10.833 [2024-06-10 10:54:35.078459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.833 [2024-06-10 10:54:35.078468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.833 qpair failed and we were unable to recover it. 00:29:10.833 [2024-06-10 10:54:35.078814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.833 [2024-06-10 10:54:35.078823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.833 qpair failed and we were unable to recover it. 00:29:10.833 [2024-06-10 10:54:35.078928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.833 [2024-06-10 10:54:35.078937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.833 qpair failed and we were unable to recover it. 00:29:10.833 [2024-06-10 10:54:35.079351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.833 [2024-06-10 10:54:35.079362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.833 qpair failed and we were unable to recover it. 00:29:10.833 [2024-06-10 10:54:35.079763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.833 [2024-06-10 10:54:35.079773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:10.833 qpair failed and we were unable to recover it. 00:29:11.108 [2024-06-10 10:54:35.079988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.108 [2024-06-10 10:54:35.080000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.108 qpair failed and we were unable to recover it. 00:29:11.108 [2024-06-10 10:54:35.080337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.108 [2024-06-10 10:54:35.080347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.108 qpair failed and we were unable to recover it. 00:29:11.108 [2024-06-10 10:54:35.080712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.108 [2024-06-10 10:54:35.080730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.108 qpair failed and we were unable to recover it. 00:29:11.108 [2024-06-10 10:54:35.081174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.108 [2024-06-10 10:54:35.081184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.108 qpair failed and we were unable to recover it. 00:29:11.108 [2024-06-10 10:54:35.081515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.108 [2024-06-10 10:54:35.081524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.108 qpair failed and we were unable to recover it. 
00:29:11.108 [2024-06-10 10:54:35.081893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.108 [2024-06-10 10:54:35.081903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.108 qpair failed and we were unable to recover it. 00:29:11.108 [2024-06-10 10:54:35.082268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.108 [2024-06-10 10:54:35.082277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.108 qpair failed and we were unable to recover it. 00:29:11.108 [2024-06-10 10:54:35.082684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.108 [2024-06-10 10:54:35.082694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.108 qpair failed and we were unable to recover it. 00:29:11.108 [2024-06-10 10:54:35.083063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.108 [2024-06-10 10:54:35.083072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.108 qpair failed and we were unable to recover it. 00:29:11.108 [2024-06-10 10:54:35.083416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.108 [2024-06-10 10:54:35.083426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.108 qpair failed and we were unable to recover it. 00:29:11.108 [2024-06-10 10:54:35.083708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.108 [2024-06-10 10:54:35.083718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.108 qpair failed and we were unable to recover it. 00:29:11.108 [2024-06-10 10:54:35.083982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.108 [2024-06-10 10:54:35.083992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.108 qpair failed and we were unable to recover it. 00:29:11.108 [2024-06-10 10:54:35.084353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.108 [2024-06-10 10:54:35.084363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.108 qpair failed and we were unable to recover it. 00:29:11.108 [2024-06-10 10:54:35.084717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.108 [2024-06-10 10:54:35.084726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.108 qpair failed and we were unable to recover it. 00:29:11.108 [2024-06-10 10:54:35.085036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.108 [2024-06-10 10:54:35.085046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.108 qpair failed and we were unable to recover it. 
00:29:11.108 [2024-06-10 10:54:35.085460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.109 [2024-06-10 10:54:35.085469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.109 qpair failed and we were unable to recover it. 00:29:11.109 [2024-06-10 10:54:35.085697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.109 [2024-06-10 10:54:35.085707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.109 qpair failed and we were unable to recover it. 00:29:11.109 [2024-06-10 10:54:35.085974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.109 [2024-06-10 10:54:35.085983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.109 qpair failed and we were unable to recover it. 00:29:11.109 [2024-06-10 10:54:35.086318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.109 [2024-06-10 10:54:35.086327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.109 qpair failed and we were unable to recover it. 00:29:11.109 [2024-06-10 10:54:35.086683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.109 [2024-06-10 10:54:35.086694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.109 qpair failed and we were unable to recover it. 00:29:11.109 [2024-06-10 10:54:35.087090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.109 [2024-06-10 10:54:35.087099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.109 qpair failed and we were unable to recover it. 00:29:11.109 [2024-06-10 10:54:35.087441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.109 [2024-06-10 10:54:35.087451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.109 qpair failed and we were unable to recover it. 00:29:11.109 [2024-06-10 10:54:35.087828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.109 [2024-06-10 10:54:35.087842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.109 qpair failed and we were unable to recover it. 00:29:11.109 [2024-06-10 10:54:35.088236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.109 [2024-06-10 10:54:35.088249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.109 qpair failed and we were unable to recover it. 00:29:11.109 [2024-06-10 10:54:35.088560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.109 [2024-06-10 10:54:35.088569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.109 qpair failed and we were unable to recover it. 
00:29:11.109 [2024-06-10 10:54:35.088820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.109 [2024-06-10 10:54:35.088831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.109 qpair failed and we were unable to recover it. 00:29:11.109 [2024-06-10 10:54:35.089207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.109 [2024-06-10 10:54:35.089216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.109 qpair failed and we were unable to recover it. 00:29:11.109 [2024-06-10 10:54:35.089582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.109 [2024-06-10 10:54:35.089592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.109 qpair failed and we were unable to recover it. 00:29:11.109 [2024-06-10 10:54:35.089960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.109 [2024-06-10 10:54:35.089970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.109 qpair failed and we were unable to recover it. 00:29:11.109 [2024-06-10 10:54:35.090382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.109 [2024-06-10 10:54:35.090392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.109 qpair failed and we were unable to recover it. 00:29:11.109 [2024-06-10 10:54:35.090740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.109 [2024-06-10 10:54:35.090749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.109 qpair failed and we were unable to recover it. 00:29:11.109 [2024-06-10 10:54:35.091146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.109 [2024-06-10 10:54:35.091156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.109 qpair failed and we were unable to recover it. 00:29:11.109 [2024-06-10 10:54:35.091390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.109 [2024-06-10 10:54:35.091399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.109 qpair failed and we were unable to recover it. 00:29:11.109 [2024-06-10 10:54:35.091648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.109 [2024-06-10 10:54:35.091657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.109 qpair failed and we were unable to recover it. 00:29:11.109 [2024-06-10 10:54:35.091985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.109 [2024-06-10 10:54:35.091994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.109 qpair failed and we were unable to recover it. 
00:29:11.109 [2024-06-10 10:54:35.092340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.109 [2024-06-10 10:54:35.092349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.109 qpair failed and we were unable to recover it. 00:29:11.109 [2024-06-10 10:54:35.092732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.109 [2024-06-10 10:54:35.092741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.109 qpair failed and we were unable to recover it. 00:29:11.109 [2024-06-10 10:54:35.093069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.109 [2024-06-10 10:54:35.093079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.109 qpair failed and we were unable to recover it. 00:29:11.109 [2024-06-10 10:54:35.093438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.109 [2024-06-10 10:54:35.093448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.109 qpair failed and we were unable to recover it. 00:29:11.109 [2024-06-10 10:54:35.093773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.109 [2024-06-10 10:54:35.093782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.109 qpair failed and we were unable to recover it. 00:29:11.109 [2024-06-10 10:54:35.094135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.109 [2024-06-10 10:54:35.094144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.109 qpair failed and we were unable to recover it. 00:29:11.109 [2024-06-10 10:54:35.094511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.109 [2024-06-10 10:54:35.094520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.109 qpair failed and we were unable to recover it. 00:29:11.109 [2024-06-10 10:54:35.094876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.109 [2024-06-10 10:54:35.094885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.109 qpair failed and we were unable to recover it. 00:29:11.109 [2024-06-10 10:54:35.095269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.109 [2024-06-10 10:54:35.095278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.109 qpair failed and we were unable to recover it. 00:29:11.109 [2024-06-10 10:54:35.095628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.109 [2024-06-10 10:54:35.095638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.109 qpair failed and we were unable to recover it. 
00:29:11.109 [2024-06-10 10:54:35.095980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.109 [2024-06-10 10:54:35.095989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.109 qpair failed and we were unable to recover it. 00:29:11.109 [2024-06-10 10:54:35.096363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.109 [2024-06-10 10:54:35.096372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.109 qpair failed and we were unable to recover it. 00:29:11.109 [2024-06-10 10:54:35.096734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.109 [2024-06-10 10:54:35.096743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.109 qpair failed and we were unable to recover it. 00:29:11.109 [2024-06-10 10:54:35.097097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.109 [2024-06-10 10:54:35.097106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.109 qpair failed and we were unable to recover it. 00:29:11.109 [2024-06-10 10:54:35.097596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.109 [2024-06-10 10:54:35.097606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.109 qpair failed and we were unable to recover it. 00:29:11.109 [2024-06-10 10:54:35.097968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.110 [2024-06-10 10:54:35.097978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.110 qpair failed and we were unable to recover it. 00:29:11.110 [2024-06-10 10:54:35.098304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.110 [2024-06-10 10:54:35.098313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.110 qpair failed and we were unable to recover it. 00:29:11.110 [2024-06-10 10:54:35.098562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.110 [2024-06-10 10:54:35.098571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.110 qpair failed and we were unable to recover it. 00:29:11.110 [2024-06-10 10:54:35.098958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.110 [2024-06-10 10:54:35.098966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.110 qpair failed and we were unable to recover it. 00:29:11.110 [2024-06-10 10:54:35.099197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.110 [2024-06-10 10:54:35.099206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.110 qpair failed and we were unable to recover it. 
00:29:11.110 [2024-06-10 10:54:35.099532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.110 [2024-06-10 10:54:35.099542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.110 qpair failed and we were unable to recover it. 00:29:11.110 [2024-06-10 10:54:35.099940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.110 [2024-06-10 10:54:35.099952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.110 qpair failed and we were unable to recover it. 00:29:11.110 [2024-06-10 10:54:35.100218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.110 [2024-06-10 10:54:35.100228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.110 qpair failed and we were unable to recover it. 00:29:11.110 [2024-06-10 10:54:35.100568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.110 [2024-06-10 10:54:35.100578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.110 qpair failed and we were unable to recover it. 00:29:11.110 [2024-06-10 10:54:35.100814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.110 [2024-06-10 10:54:35.100823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.110 qpair failed and we were unable to recover it. 00:29:11.110 [2024-06-10 10:54:35.101141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.110 [2024-06-10 10:54:35.101150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.110 qpair failed and we were unable to recover it. 00:29:11.110 [2024-06-10 10:54:35.101510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.110 [2024-06-10 10:54:35.101520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.110 qpair failed and we were unable to recover it. 00:29:11.110 [2024-06-10 10:54:35.101865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.110 [2024-06-10 10:54:35.101874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.110 qpair failed and we were unable to recover it. 00:29:11.110 [2024-06-10 10:54:35.102342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.110 [2024-06-10 10:54:35.102351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.110 qpair failed and we were unable to recover it. 00:29:11.110 [2024-06-10 10:54:35.102776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.110 [2024-06-10 10:54:35.102785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.110 qpair failed and we were unable to recover it. 
00:29:11.110 [2024-06-10 10:54:35.103155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.110 [2024-06-10 10:54:35.103164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.110 qpair failed and we were unable to recover it. 00:29:11.110 [2024-06-10 10:54:35.103540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.110 [2024-06-10 10:54:35.103550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.110 qpair failed and we were unable to recover it. 00:29:11.110 [2024-06-10 10:54:35.103973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.110 [2024-06-10 10:54:35.103982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.110 qpair failed and we were unable to recover it. 00:29:11.110 [2024-06-10 10:54:35.104341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.110 [2024-06-10 10:54:35.104351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.110 qpair failed and we were unable to recover it. 00:29:11.110 [2024-06-10 10:54:35.104664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.110 [2024-06-10 10:54:35.104673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.110 qpair failed and we were unable to recover it. 00:29:11.110 [2024-06-10 10:54:35.105037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.110 [2024-06-10 10:54:35.105047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.110 qpair failed and we were unable to recover it. 00:29:11.110 [2024-06-10 10:54:35.105405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.110 [2024-06-10 10:54:35.105415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.110 qpair failed and we were unable to recover it. 00:29:11.110 [2024-06-10 10:54:35.105753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.110 [2024-06-10 10:54:35.105762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.110 qpair failed and we were unable to recover it. 00:29:11.110 [2024-06-10 10:54:35.106127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.110 [2024-06-10 10:54:35.106137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.110 qpair failed and we were unable to recover it. 00:29:11.110 [2024-06-10 10:54:35.106508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.110 [2024-06-10 10:54:35.106517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.110 qpair failed and we were unable to recover it. 
00:29:11.110 [2024-06-10 10:54:35.106873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.110 [2024-06-10 10:54:35.106883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.110 qpair failed and we were unable to recover it. 00:29:11.110 [2024-06-10 10:54:35.107320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.110 [2024-06-10 10:54:35.107329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.110 qpair failed and we were unable to recover it. 00:29:11.110 [2024-06-10 10:54:35.107664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.110 [2024-06-10 10:54:35.107673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.110 qpair failed and we were unable to recover it. 00:29:11.110 [2024-06-10 10:54:35.107914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.110 [2024-06-10 10:54:35.107923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.110 qpair failed and we were unable to recover it. 00:29:11.110 [2024-06-10 10:54:35.108288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.110 [2024-06-10 10:54:35.108297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.110 qpair failed and we were unable to recover it. 00:29:11.110 [2024-06-10 10:54:35.108629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.110 [2024-06-10 10:54:35.108645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.110 qpair failed and we were unable to recover it. 00:29:11.110 [2024-06-10 10:54:35.109031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.110 [2024-06-10 10:54:35.109040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.110 qpair failed and we were unable to recover it. 00:29:11.110 [2024-06-10 10:54:35.109411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.110 [2024-06-10 10:54:35.109421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.110 qpair failed and we were unable to recover it. 00:29:11.110 [2024-06-10 10:54:35.109790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.110 [2024-06-10 10:54:35.109801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.110 qpair failed and we were unable to recover it. 00:29:11.110 [2024-06-10 10:54:35.110069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.110 [2024-06-10 10:54:35.110078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.110 qpair failed and we were unable to recover it. 
00:29:11.110 [2024-06-10 10:54:35.110425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.110 [2024-06-10 10:54:35.110434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.110 qpair failed and we were unable to recover it. 00:29:11.110 [2024-06-10 10:54:35.110784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.110 [2024-06-10 10:54:35.110793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.110 qpair failed and we were unable to recover it. 00:29:11.111 [2024-06-10 10:54:35.111144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.111 [2024-06-10 10:54:35.111153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.111 qpair failed and we were unable to recover it. 00:29:11.111 [2024-06-10 10:54:35.111513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.111 [2024-06-10 10:54:35.111523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.111 qpair failed and we were unable to recover it. 00:29:11.111 [2024-06-10 10:54:35.111881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.111 [2024-06-10 10:54:35.111890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.111 qpair failed and we were unable to recover it. 00:29:11.111 [2024-06-10 10:54:35.112198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.111 [2024-06-10 10:54:35.112208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.111 qpair failed and we were unable to recover it. 00:29:11.111 [2024-06-10 10:54:35.112604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.111 [2024-06-10 10:54:35.112613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.111 qpair failed and we were unable to recover it. 00:29:11.111 [2024-06-10 10:54:35.112955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.111 [2024-06-10 10:54:35.112964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.111 qpair failed and we were unable to recover it. 00:29:11.111 [2024-06-10 10:54:35.113357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.111 [2024-06-10 10:54:35.113367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.111 qpair failed and we were unable to recover it. 00:29:11.111 [2024-06-10 10:54:35.113714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.111 [2024-06-10 10:54:35.113723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.111 qpair failed and we were unable to recover it. 
00:29:11.111 [2024-06-10 10:54:35.113969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.111 [2024-06-10 10:54:35.113978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.111 qpair failed and we were unable to recover it. 
[... the same pair of messages -- posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it." -- repeats for every reconnect attempt logged between 10:54:35.113 and 10:54:35.188; the intermediate occurrences differ only in their timestamps and are elided here ...]
00:29:11.117 [2024-06-10 10:54:35.188291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.117 [2024-06-10 10:54:35.188301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.117 qpair failed and we were unable to recover it. 
00:29:11.117 [2024-06-10 10:54:35.188593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.117 [2024-06-10 10:54:35.188602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.117 qpair failed and we were unable to recover it. 00:29:11.117 [2024-06-10 10:54:35.188977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.117 [2024-06-10 10:54:35.188986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.117 qpair failed and we were unable to recover it. 00:29:11.117 [2024-06-10 10:54:35.189310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.117 [2024-06-10 10:54:35.189319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.117 qpair failed and we were unable to recover it. 00:29:11.117 [2024-06-10 10:54:35.189677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.117 [2024-06-10 10:54:35.189686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.117 qpair failed and we were unable to recover it. 00:29:11.117 [2024-06-10 10:54:35.190026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.117 [2024-06-10 10:54:35.190035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.117 qpair failed and we were unable to recover it. 00:29:11.117 [2024-06-10 10:54:35.190342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.117 [2024-06-10 10:54:35.190351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.117 qpair failed and we were unable to recover it. 00:29:11.117 [2024-06-10 10:54:35.190717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.117 [2024-06-10 10:54:35.190726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.117 qpair failed and we were unable to recover it. 00:29:11.117 [2024-06-10 10:54:35.191084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.117 [2024-06-10 10:54:35.191094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.117 qpair failed and we were unable to recover it. 00:29:11.117 [2024-06-10 10:54:35.191442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.117 [2024-06-10 10:54:35.191452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.117 qpair failed and we were unable to recover it. 00:29:11.117 [2024-06-10 10:54:35.191792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.117 [2024-06-10 10:54:35.191800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.117 qpair failed and we were unable to recover it. 
00:29:11.117 [2024-06-10 10:54:35.192174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.117 [2024-06-10 10:54:35.192183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.117 qpair failed and we were unable to recover it. 00:29:11.117 [2024-06-10 10:54:35.192536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.117 [2024-06-10 10:54:35.192545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.117 qpair failed and we were unable to recover it. 00:29:11.117 [2024-06-10 10:54:35.192879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.117 [2024-06-10 10:54:35.192888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.117 qpair failed and we were unable to recover it. 00:29:11.117 [2024-06-10 10:54:35.193319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.117 [2024-06-10 10:54:35.193329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.117 qpair failed and we were unable to recover it. 00:29:11.117 [2024-06-10 10:54:35.193581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.117 [2024-06-10 10:54:35.193590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.117 qpair failed and we were unable to recover it. 00:29:11.117 [2024-06-10 10:54:35.193953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.117 [2024-06-10 10:54:35.193962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.117 qpair failed and we were unable to recover it. 00:29:11.117 [2024-06-10 10:54:35.194299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.117 [2024-06-10 10:54:35.194308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.117 qpair failed and we were unable to recover it. 00:29:11.117 [2024-06-10 10:54:35.194669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.117 [2024-06-10 10:54:35.194678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.117 qpair failed and we were unable to recover it. 00:29:11.117 [2024-06-10 10:54:35.195033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.117 [2024-06-10 10:54:35.195042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.117 qpair failed and we were unable to recover it. 00:29:11.117 [2024-06-10 10:54:35.195378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.117 [2024-06-10 10:54:35.195388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.117 qpair failed and we were unable to recover it. 
00:29:11.117 [2024-06-10 10:54:35.195756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.117 [2024-06-10 10:54:35.195766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.117 qpair failed and we were unable to recover it. 00:29:11.117 [2024-06-10 10:54:35.196120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.117 [2024-06-10 10:54:35.196128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.117 qpair failed and we were unable to recover it. 00:29:11.117 [2024-06-10 10:54:35.196453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.117 [2024-06-10 10:54:35.196463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.117 qpair failed and we were unable to recover it. 00:29:11.118 [2024-06-10 10:54:35.196803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.118 [2024-06-10 10:54:35.196813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.118 qpair failed and we were unable to recover it. 00:29:11.118 [2024-06-10 10:54:35.197066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.118 [2024-06-10 10:54:35.197075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.118 qpair failed and we were unable to recover it. 00:29:11.118 [2024-06-10 10:54:35.197441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.118 [2024-06-10 10:54:35.197450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.118 qpair failed and we were unable to recover it. 00:29:11.118 [2024-06-10 10:54:35.197893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.118 [2024-06-10 10:54:35.197903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.118 qpair failed and we were unable to recover it. 00:29:11.118 [2024-06-10 10:54:35.198283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.118 [2024-06-10 10:54:35.198292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.118 qpair failed and we were unable to recover it. 00:29:11.118 [2024-06-10 10:54:35.198656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.118 [2024-06-10 10:54:35.198665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.118 qpair failed and we were unable to recover it. 00:29:11.118 [2024-06-10 10:54:35.199025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.118 [2024-06-10 10:54:35.199034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.118 qpair failed and we were unable to recover it. 
00:29:11.118 [2024-06-10 10:54:35.199369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.118 [2024-06-10 10:54:35.199379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.118 qpair failed and we were unable to recover it. 00:29:11.118 [2024-06-10 10:54:35.199570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.118 [2024-06-10 10:54:35.199580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.118 qpair failed and we were unable to recover it. 00:29:11.118 [2024-06-10 10:54:35.199812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.118 [2024-06-10 10:54:35.199821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.118 qpair failed and we were unable to recover it. 00:29:11.118 [2024-06-10 10:54:35.200182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.118 [2024-06-10 10:54:35.200191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.118 qpair failed and we were unable to recover it. 00:29:11.118 [2024-06-10 10:54:35.200442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.118 [2024-06-10 10:54:35.200451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.118 qpair failed and we were unable to recover it. 00:29:11.118 [2024-06-10 10:54:35.200768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.118 [2024-06-10 10:54:35.200776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.118 qpair failed and we were unable to recover it. 00:29:11.118 [2024-06-10 10:54:35.201037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.118 [2024-06-10 10:54:35.201047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.118 qpair failed and we were unable to recover it. 00:29:11.118 [2024-06-10 10:54:35.201308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.118 [2024-06-10 10:54:35.201318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.118 qpair failed and we were unable to recover it. 00:29:11.118 [2024-06-10 10:54:35.201675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.118 [2024-06-10 10:54:35.201685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.118 qpair failed and we were unable to recover it. 00:29:11.118 [2024-06-10 10:54:35.201916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.118 [2024-06-10 10:54:35.201926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.118 qpair failed and we were unable to recover it. 
00:29:11.118 [2024-06-10 10:54:35.202188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.118 [2024-06-10 10:54:35.202198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.118 qpair failed and we were unable to recover it. 00:29:11.118 [2024-06-10 10:54:35.202544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.118 [2024-06-10 10:54:35.202554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.118 qpair failed and we were unable to recover it. 00:29:11.118 [2024-06-10 10:54:35.202916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.118 [2024-06-10 10:54:35.202926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.118 qpair failed and we were unable to recover it. 00:29:11.118 [2024-06-10 10:54:35.203303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.118 [2024-06-10 10:54:35.203313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.118 qpair failed and we were unable to recover it. 00:29:11.118 [2024-06-10 10:54:35.203660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.118 [2024-06-10 10:54:35.203669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.118 qpair failed and we were unable to recover it. 00:29:11.118 [2024-06-10 10:54:35.203970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.118 [2024-06-10 10:54:35.203979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.118 qpair failed and we were unable to recover it. 00:29:11.118 [2024-06-10 10:54:35.204337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.118 [2024-06-10 10:54:35.204346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.118 qpair failed and we were unable to recover it. 00:29:11.118 [2024-06-10 10:54:35.204694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.118 [2024-06-10 10:54:35.204703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.118 qpair failed and we were unable to recover it. 00:29:11.118 [2024-06-10 10:54:35.205146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.118 [2024-06-10 10:54:35.205156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.118 qpair failed and we were unable to recover it. 00:29:11.118 [2024-06-10 10:54:35.205542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.118 [2024-06-10 10:54:35.205551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.118 qpair failed and we were unable to recover it. 
00:29:11.118 [2024-06-10 10:54:35.205923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.118 [2024-06-10 10:54:35.205933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.118 qpair failed and we were unable to recover it. 00:29:11.119 [2024-06-10 10:54:35.206197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.119 [2024-06-10 10:54:35.206207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.119 qpair failed and we were unable to recover it. 00:29:11.119 [2024-06-10 10:54:35.206562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.119 [2024-06-10 10:54:35.206571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.119 qpair failed and we were unable to recover it. 00:29:11.119 [2024-06-10 10:54:35.206929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.119 [2024-06-10 10:54:35.206939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.119 qpair failed and we were unable to recover it. 00:29:11.119 [2024-06-10 10:54:35.207292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.119 [2024-06-10 10:54:35.207302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.119 qpair failed and we were unable to recover it. 00:29:11.119 [2024-06-10 10:54:35.207662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.119 [2024-06-10 10:54:35.207672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.119 qpair failed and we were unable to recover it. 00:29:11.119 [2024-06-10 10:54:35.208030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.119 [2024-06-10 10:54:35.208039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.119 qpair failed and we were unable to recover it. 00:29:11.119 [2024-06-10 10:54:35.208459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.119 [2024-06-10 10:54:35.208468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.119 qpair failed and we were unable to recover it. 00:29:11.119 [2024-06-10 10:54:35.208803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.119 [2024-06-10 10:54:35.208812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.119 qpair failed and we were unable to recover it. 00:29:11.119 [2024-06-10 10:54:35.209156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.119 [2024-06-10 10:54:35.209165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.119 qpair failed and we were unable to recover it. 
00:29:11.119 [2024-06-10 10:54:35.209520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.119 [2024-06-10 10:54:35.209530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.119 qpair failed and we were unable to recover it. 00:29:11.119 [2024-06-10 10:54:35.209725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.119 [2024-06-10 10:54:35.209735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.119 qpair failed and we were unable to recover it. 00:29:11.119 [2024-06-10 10:54:35.210102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.119 [2024-06-10 10:54:35.210111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.119 qpair failed and we were unable to recover it. 00:29:11.119 [2024-06-10 10:54:35.210447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.119 [2024-06-10 10:54:35.210458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.119 qpair failed and we were unable to recover it. 00:29:11.119 [2024-06-10 10:54:35.210831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.119 [2024-06-10 10:54:35.210840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.119 qpair failed and we were unable to recover it. 00:29:11.119 [2024-06-10 10:54:35.211195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.119 [2024-06-10 10:54:35.211205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.119 qpair failed and we were unable to recover it. 00:29:11.119 [2024-06-10 10:54:35.211567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.119 [2024-06-10 10:54:35.211578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.119 qpair failed and we were unable to recover it. 00:29:11.119 [2024-06-10 10:54:35.211990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.119 [2024-06-10 10:54:35.211999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.119 qpair failed and we were unable to recover it. 00:29:11.119 [2024-06-10 10:54:35.212361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.119 [2024-06-10 10:54:35.212371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.119 qpair failed and we were unable to recover it. 00:29:11.119 [2024-06-10 10:54:35.212756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.119 [2024-06-10 10:54:35.212765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.119 qpair failed and we were unable to recover it. 
00:29:11.119 [2024-06-10 10:54:35.213106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.119 [2024-06-10 10:54:35.213115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.119 qpair failed and we were unable to recover it. 00:29:11.119 [2024-06-10 10:54:35.213497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.119 [2024-06-10 10:54:35.213508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.119 qpair failed and we were unable to recover it. 00:29:11.119 [2024-06-10 10:54:35.213732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.119 [2024-06-10 10:54:35.213741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.119 qpair failed and we were unable to recover it. 00:29:11.119 [2024-06-10 10:54:35.214093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.119 [2024-06-10 10:54:35.214103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.119 qpair failed and we were unable to recover it. 00:29:11.119 [2024-06-10 10:54:35.214461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.119 [2024-06-10 10:54:35.214470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.119 qpair failed and we were unable to recover it. 00:29:11.119 [2024-06-10 10:54:35.214809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.119 [2024-06-10 10:54:35.214817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.119 qpair failed and we were unable to recover it. 00:29:11.119 [2024-06-10 10:54:35.215179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.119 [2024-06-10 10:54:35.215189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.119 qpair failed and we were unable to recover it. 00:29:11.119 [2024-06-10 10:54:35.215561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.119 [2024-06-10 10:54:35.215570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.119 qpair failed and we were unable to recover it. 00:29:11.119 [2024-06-10 10:54:35.215901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.119 [2024-06-10 10:54:35.215909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.119 qpair failed and we were unable to recover it. 00:29:11.119 [2024-06-10 10:54:35.216240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.119 [2024-06-10 10:54:35.216253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.119 qpair failed and we were unable to recover it. 
00:29:11.119 [2024-06-10 10:54:35.216586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.119 [2024-06-10 10:54:35.216596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.119 qpair failed and we were unable to recover it. 00:29:11.119 [2024-06-10 10:54:35.216973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.119 [2024-06-10 10:54:35.216982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.119 qpair failed and we were unable to recover it. 00:29:11.119 [2024-06-10 10:54:35.217315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.119 [2024-06-10 10:54:35.217324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.119 qpair failed and we were unable to recover it. 00:29:11.119 [2024-06-10 10:54:35.217678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.119 [2024-06-10 10:54:35.217687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.119 qpair failed and we were unable to recover it. 00:29:11.119 [2024-06-10 10:54:35.218059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.119 [2024-06-10 10:54:35.218069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.119 qpair failed and we were unable to recover it. 00:29:11.119 [2024-06-10 10:54:35.218405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.119 [2024-06-10 10:54:35.218414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.119 qpair failed and we were unable to recover it. 00:29:11.119 [2024-06-10 10:54:35.218783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.119 [2024-06-10 10:54:35.218791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.119 qpair failed and we were unable to recover it. 00:29:11.120 [2024-06-10 10:54:35.219206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.120 [2024-06-10 10:54:35.219215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.120 qpair failed and we were unable to recover it. 00:29:11.120 [2024-06-10 10:54:35.219573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.120 [2024-06-10 10:54:35.219582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.120 qpair failed and we were unable to recover it. 00:29:11.120 [2024-06-10 10:54:35.219990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.120 [2024-06-10 10:54:35.220000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.120 qpair failed and we were unable to recover it. 
00:29:11.120 [2024-06-10 10:54:35.220330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.120 [2024-06-10 10:54:35.220341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.120 qpair failed and we were unable to recover it. 00:29:11.120 [2024-06-10 10:54:35.220699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.120 [2024-06-10 10:54:35.220708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.120 qpair failed and we were unable to recover it. 00:29:11.120 [2024-06-10 10:54:35.221068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.120 [2024-06-10 10:54:35.221077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.120 qpair failed and we were unable to recover it. 00:29:11.120 [2024-06-10 10:54:35.221412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.120 [2024-06-10 10:54:35.221421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.120 qpair failed and we were unable to recover it. 00:29:11.120 [2024-06-10 10:54:35.221823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.120 [2024-06-10 10:54:35.221833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.120 qpair failed and we were unable to recover it. 00:29:11.120 [2024-06-10 10:54:35.222042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.120 [2024-06-10 10:54:35.222052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.120 qpair failed and we were unable to recover it. 00:29:11.120 [2024-06-10 10:54:35.222466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.120 [2024-06-10 10:54:35.222476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.120 qpair failed and we were unable to recover it. 00:29:11.120 [2024-06-10 10:54:35.222809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.120 [2024-06-10 10:54:35.222818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.120 qpair failed and we were unable to recover it. 00:29:11.120 [2024-06-10 10:54:35.223186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.120 [2024-06-10 10:54:35.223195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.120 qpair failed and we were unable to recover it. 00:29:11.120 [2024-06-10 10:54:35.223541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.120 [2024-06-10 10:54:35.223550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.120 qpair failed and we were unable to recover it. 
00:29:11.120 [2024-06-10 10:54:35.223881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.120 [2024-06-10 10:54:35.223890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.120 qpair failed and we were unable to recover it. 00:29:11.120 [2024-06-10 10:54:35.224093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.120 [2024-06-10 10:54:35.224104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.120 qpair failed and we were unable to recover it. 00:29:11.120 [2024-06-10 10:54:35.224471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.120 [2024-06-10 10:54:35.224481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.120 qpair failed and we were unable to recover it. 00:29:11.120 [2024-06-10 10:54:35.224853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.120 [2024-06-10 10:54:35.224862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.120 qpair failed and we were unable to recover it. 00:29:11.120 [2024-06-10 10:54:35.225197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.120 [2024-06-10 10:54:35.225206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.120 qpair failed and we were unable to recover it. 00:29:11.120 [2024-06-10 10:54:35.225558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.120 [2024-06-10 10:54:35.225568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.120 qpair failed and we were unable to recover it. 00:29:11.120 [2024-06-10 10:54:35.225930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.120 [2024-06-10 10:54:35.225940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.120 qpair failed and we were unable to recover it. 00:29:11.120 [2024-06-10 10:54:35.226274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.120 [2024-06-10 10:54:35.226283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.120 qpair failed and we were unable to recover it. 00:29:11.120 [2024-06-10 10:54:35.226658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.120 [2024-06-10 10:54:35.226667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.120 qpair failed and we were unable to recover it. 00:29:11.120 [2024-06-10 10:54:35.227019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.120 [2024-06-10 10:54:35.227028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.120 qpair failed and we were unable to recover it. 
00:29:11.120 [2024-06-10 10:54:35.227403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.120 [2024-06-10 10:54:35.227412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.120 qpair failed and we were unable to recover it. 00:29:11.120 [2024-06-10 10:54:35.227755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.120 [2024-06-10 10:54:35.227765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.120 qpair failed and we were unable to recover it. 00:29:11.120 [2024-06-10 10:54:35.228120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.120 [2024-06-10 10:54:35.228129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.120 qpair failed and we were unable to recover it. 00:29:11.120 [2024-06-10 10:54:35.228465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.120 [2024-06-10 10:54:35.228475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.120 qpair failed and we were unable to recover it. 00:29:11.120 [2024-06-10 10:54:35.228778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.120 [2024-06-10 10:54:35.228786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.120 qpair failed and we were unable to recover it. 00:29:11.120 [2024-06-10 10:54:35.228991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.120 [2024-06-10 10:54:35.229001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.120 qpair failed and we were unable to recover it. 00:29:11.120 [2024-06-10 10:54:35.229370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.120 [2024-06-10 10:54:35.229380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.120 qpair failed and we were unable to recover it. 00:29:11.120 [2024-06-10 10:54:35.229720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.120 [2024-06-10 10:54:35.229733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.120 qpair failed and we were unable to recover it. 00:29:11.120 [2024-06-10 10:54:35.230089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.120 [2024-06-10 10:54:35.230098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.120 qpair failed and we were unable to recover it. 00:29:11.120 [2024-06-10 10:54:35.230510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.120 [2024-06-10 10:54:35.230519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.120 qpair failed and we were unable to recover it. 
00:29:11.120 [2024-06-10 10:54:35.230883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.120 [2024-06-10 10:54:35.230893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.120 qpair failed and we were unable to recover it. 00:29:11.120 [2024-06-10 10:54:35.231251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.120 [2024-06-10 10:54:35.231260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.120 qpair failed and we were unable to recover it. 00:29:11.120 [2024-06-10 10:54:35.231681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.120 [2024-06-10 10:54:35.231690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.120 qpair failed and we were unable to recover it. 00:29:11.120 [2024-06-10 10:54:35.232023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.121 [2024-06-10 10:54:35.232031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.121 qpair failed and we were unable to recover it. 00:29:11.121 [2024-06-10 10:54:35.232435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.121 [2024-06-10 10:54:35.232473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.121 qpair failed and we were unable to recover it. 00:29:11.121 [2024-06-10 10:54:35.232867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.121 [2024-06-10 10:54:35.232879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.121 qpair failed and we were unable to recover it. 00:29:11.121 [2024-06-10 10:54:35.233238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.121 [2024-06-10 10:54:35.233262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.121 qpair failed and we were unable to recover it. 00:29:11.121 [2024-06-10 10:54:35.233622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.121 [2024-06-10 10:54:35.233632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.121 qpair failed and we were unable to recover it. 00:29:11.121 [2024-06-10 10:54:35.233943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.121 [2024-06-10 10:54:35.233953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.121 qpair failed and we were unable to recover it. 00:29:11.121 [2024-06-10 10:54:35.234317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.121 [2024-06-10 10:54:35.234327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.121 qpair failed and we were unable to recover it. 
00:29:11.121 [2024-06-10 10:54:35.234664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.121 [2024-06-10 10:54:35.234673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.121 qpair failed and we were unable to recover it. 00:29:11.121 [2024-06-10 10:54:35.235070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.121 [2024-06-10 10:54:35.235080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.121 qpair failed and we were unable to recover it. 00:29:11.121 [2024-06-10 10:54:35.235453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.121 [2024-06-10 10:54:35.235471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.121 qpair failed and we were unable to recover it. 00:29:11.121 [2024-06-10 10:54:35.235828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.121 [2024-06-10 10:54:35.235837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.121 qpair failed and we were unable to recover it. 00:29:11.121 [2024-06-10 10:54:35.236170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.121 [2024-06-10 10:54:35.236180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.121 qpair failed and we were unable to recover it. 00:29:11.121 [2024-06-10 10:54:35.236537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.121 [2024-06-10 10:54:35.236546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.121 qpair failed and we were unable to recover it. 00:29:11.121 [2024-06-10 10:54:35.236758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.121 [2024-06-10 10:54:35.236767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.121 qpair failed and we were unable to recover it. 00:29:11.121 [2024-06-10 10:54:35.236986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.121 [2024-06-10 10:54:35.236996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.121 qpair failed and we were unable to recover it. 00:29:11.121 [2024-06-10 10:54:35.237375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.121 [2024-06-10 10:54:35.237384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.121 qpair failed and we were unable to recover it. 00:29:11.121 [2024-06-10 10:54:35.237720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.121 [2024-06-10 10:54:35.237729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.121 qpair failed and we were unable to recover it. 
00:29:11.121 [2024-06-10 10:54:35.238087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.121 [2024-06-10 10:54:35.238097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.121 qpair failed and we were unable to recover it. 00:29:11.121 [2024-06-10 10:54:35.238452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.121 [2024-06-10 10:54:35.238461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.121 qpair failed and we were unable to recover it. 00:29:11.121 [2024-06-10 10:54:35.238795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.121 [2024-06-10 10:54:35.238804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.121 qpair failed and we were unable to recover it. 00:29:11.121 [2024-06-10 10:54:35.239046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.121 [2024-06-10 10:54:35.239055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.121 qpair failed and we were unable to recover it. 00:29:11.121 [2024-06-10 10:54:35.239446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.121 [2024-06-10 10:54:35.239456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.121 qpair failed and we were unable to recover it. 00:29:11.121 [2024-06-10 10:54:35.239847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.121 [2024-06-10 10:54:35.239856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.121 qpair failed and we were unable to recover it. 00:29:11.121 [2024-06-10 10:54:35.240191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.121 [2024-06-10 10:54:35.240200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.121 qpair failed and we were unable to recover it. 00:29:11.121 [2024-06-10 10:54:35.240573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.121 [2024-06-10 10:54:35.240584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.121 qpair failed and we were unable to recover it. 00:29:11.121 [2024-06-10 10:54:35.240960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.121 [2024-06-10 10:54:35.240970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.121 qpair failed and we were unable to recover it. 00:29:11.121 [2024-06-10 10:54:35.241318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.121 [2024-06-10 10:54:35.241328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.121 qpair failed and we were unable to recover it. 
00:29:11.121 [2024-06-10 10:54:35.241674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.121 [2024-06-10 10:54:35.241683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.121 qpair failed and we were unable to recover it. 00:29:11.121 [2024-06-10 10:54:35.242036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.121 [2024-06-10 10:54:35.242052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.121 qpair failed and we were unable to recover it. 00:29:11.121 [2024-06-10 10:54:35.242313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.121 [2024-06-10 10:54:35.242323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.121 qpair failed and we were unable to recover it. 00:29:11.121 [2024-06-10 10:54:35.242682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.121 [2024-06-10 10:54:35.242692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.121 qpair failed and we were unable to recover it. 00:29:11.121 [2024-06-10 10:54:35.243049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.121 [2024-06-10 10:54:35.243058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.121 qpair failed and we were unable to recover it. 00:29:11.121 [2024-06-10 10:54:35.243298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.121 [2024-06-10 10:54:35.243307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.121 qpair failed and we were unable to recover it. 00:29:11.121 [2024-06-10 10:54:35.243665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.121 [2024-06-10 10:54:35.243675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.121 qpair failed and we were unable to recover it. 00:29:11.121 [2024-06-10 10:54:35.243869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.121 [2024-06-10 10:54:35.243880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.121 qpair failed and we were unable to recover it. 00:29:11.121 [2024-06-10 10:54:35.244145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.121 [2024-06-10 10:54:35.244155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.121 qpair failed and we were unable to recover it. 00:29:11.121 [2024-06-10 10:54:35.244614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.121 [2024-06-10 10:54:35.244624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.121 qpair failed and we were unable to recover it. 
00:29:11.121 [2024-06-10 10:54:35.244976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.121 [2024-06-10 10:54:35.244985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.122 qpair failed and we were unable to recover it. 00:29:11.122 [2024-06-10 10:54:35.245305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.122 [2024-06-10 10:54:35.245315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.122 qpair failed and we were unable to recover it. 00:29:11.122 [2024-06-10 10:54:35.245664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.122 [2024-06-10 10:54:35.245674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.122 qpair failed and we were unable to recover it. 00:29:11.122 [2024-06-10 10:54:35.246029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.122 [2024-06-10 10:54:35.246038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.122 qpair failed and we were unable to recover it. 00:29:11.122 [2024-06-10 10:54:35.246386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.122 [2024-06-10 10:54:35.246395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.122 qpair failed and we were unable to recover it. 00:29:11.122 [2024-06-10 10:54:35.246840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.122 [2024-06-10 10:54:35.246850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.122 qpair failed and we were unable to recover it. 00:29:11.122 [2024-06-10 10:54:35.247213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.122 [2024-06-10 10:54:35.247223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.122 qpair failed and we were unable to recover it. 00:29:11.122 [2024-06-10 10:54:35.247581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.122 [2024-06-10 10:54:35.247591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.122 qpair failed and we were unable to recover it. 00:29:11.122 [2024-06-10 10:54:35.247864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.122 [2024-06-10 10:54:35.247873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.122 qpair failed and we were unable to recover it. 00:29:11.122 [2024-06-10 10:54:35.248230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.122 [2024-06-10 10:54:35.248239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.122 qpair failed and we were unable to recover it. 
00:29:11.122 [2024-06-10 10:54:35.248491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.122 [2024-06-10 10:54:35.248500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.122 qpair failed and we were unable to recover it. 00:29:11.122 [2024-06-10 10:54:35.248900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.122 [2024-06-10 10:54:35.248909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.122 qpair failed and we were unable to recover it. 00:29:11.122 [2024-06-10 10:54:35.249162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.122 [2024-06-10 10:54:35.249172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.122 qpair failed and we were unable to recover it. 00:29:11.122 [2024-06-10 10:54:35.249528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.122 [2024-06-10 10:54:35.249537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.122 qpair failed and we were unable to recover it. 00:29:11.122 [2024-06-10 10:54:35.249869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.122 [2024-06-10 10:54:35.249878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.122 qpair failed and we were unable to recover it. 00:29:11.122 [2024-06-10 10:54:35.250230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.122 [2024-06-10 10:54:35.250248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.122 qpair failed and we were unable to recover it. 00:29:11.122 [2024-06-10 10:54:35.250636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.122 [2024-06-10 10:54:35.250645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.122 qpair failed and we were unable to recover it. 00:29:11.122 [2024-06-10 10:54:35.250834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.122 [2024-06-10 10:54:35.250844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.122 qpair failed and we were unable to recover it. 00:29:11.122 [2024-06-10 10:54:35.251107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.122 [2024-06-10 10:54:35.251116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.122 qpair failed and we were unable to recover it. 00:29:11.122 [2024-06-10 10:54:35.251514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.122 [2024-06-10 10:54:35.251523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.122 qpair failed and we were unable to recover it. 
00:29:11.122 [2024-06-10 10:54:35.251858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.122 [2024-06-10 10:54:35.251868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.122 qpair failed and we were unable to recover it. 00:29:11.122 [2024-06-10 10:54:35.252225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.122 [2024-06-10 10:54:35.252234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.122 qpair failed and we were unable to recover it. 00:29:11.122 [2024-06-10 10:54:35.252581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.122 [2024-06-10 10:54:35.252590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.122 qpair failed and we were unable to recover it. 00:29:11.122 [2024-06-10 10:54:35.252889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.122 [2024-06-10 10:54:35.252898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.122 qpair failed and we were unable to recover it. 00:29:11.122 [2024-06-10 10:54:35.253167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.122 [2024-06-10 10:54:35.253177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.122 qpair failed and we were unable to recover it. 00:29:11.122 [2024-06-10 10:54:35.253539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.122 [2024-06-10 10:54:35.253552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.122 qpair failed and we were unable to recover it. 00:29:11.122 [2024-06-10 10:54:35.253913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.122 [2024-06-10 10:54:35.253922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.122 qpair failed and we were unable to recover it. 00:29:11.122 [2024-06-10 10:54:35.254280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.122 [2024-06-10 10:54:35.254289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.122 qpair failed and we were unable to recover it. 00:29:11.122 [2024-06-10 10:54:35.254648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.122 [2024-06-10 10:54:35.254657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.122 qpair failed and we were unable to recover it. 00:29:11.122 [2024-06-10 10:54:35.254960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.122 [2024-06-10 10:54:35.254969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.122 qpair failed and we were unable to recover it. 
00:29:11.122 [2024-06-10 10:54:35.255326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.122 [2024-06-10 10:54:35.255336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.122 qpair failed and we were unable to recover it. 00:29:11.122 [2024-06-10 10:54:35.255673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.122 [2024-06-10 10:54:35.255682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.122 qpair failed and we were unable to recover it. 00:29:11.123 [2024-06-10 10:54:35.255930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.123 [2024-06-10 10:54:35.255938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.123 qpair failed and we were unable to recover it. 00:29:11.123 [2024-06-10 10:54:35.256300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.123 [2024-06-10 10:54:35.256309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.123 qpair failed and we were unable to recover it. 00:29:11.123 [2024-06-10 10:54:35.256649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.123 [2024-06-10 10:54:35.256659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.123 qpair failed and we were unable to recover it. 00:29:11.123 [2024-06-10 10:54:35.256924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.123 [2024-06-10 10:54:35.256932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.123 qpair failed and we were unable to recover it. 00:29:11.123 [2024-06-10 10:54:35.257274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.123 [2024-06-10 10:54:35.257284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.123 qpair failed and we were unable to recover it. 00:29:11.123 [2024-06-10 10:54:35.257658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.123 [2024-06-10 10:54:35.257668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.123 qpair failed and we were unable to recover it. 00:29:11.123 [2024-06-10 10:54:35.257931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.123 [2024-06-10 10:54:35.257940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.123 qpair failed and we were unable to recover it. 00:29:11.123 [2024-06-10 10:54:35.258326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.123 [2024-06-10 10:54:35.258336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.123 qpair failed and we were unable to recover it. 
00:29:11.123 [2024-06-10 10:54:35.258667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.123 [2024-06-10 10:54:35.258677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.123 qpair failed and we were unable to recover it. 00:29:11.123 [2024-06-10 10:54:35.259105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.123 [2024-06-10 10:54:35.259114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.123 qpair failed and we were unable to recover it. 00:29:11.123 [2024-06-10 10:54:35.259360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.123 [2024-06-10 10:54:35.259369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.123 qpair failed and we were unable to recover it. 00:29:11.123 [2024-06-10 10:54:35.259761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.123 [2024-06-10 10:54:35.259771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.123 qpair failed and we were unable to recover it. 00:29:11.123 [2024-06-10 10:54:35.260151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.123 [2024-06-10 10:54:35.260161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.123 qpair failed and we were unable to recover it. 00:29:11.123 [2024-06-10 10:54:35.260521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.123 [2024-06-10 10:54:35.260530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.123 qpair failed and we were unable to recover it. 00:29:11.123 [2024-06-10 10:54:35.260866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.123 [2024-06-10 10:54:35.260876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.123 qpair failed and we were unable to recover it. 00:29:11.123 [2024-06-10 10:54:35.261129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.123 [2024-06-10 10:54:35.261138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.123 qpair failed and we were unable to recover it. 00:29:11.123 [2024-06-10 10:54:35.261505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.123 [2024-06-10 10:54:35.261515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.123 qpair failed and we were unable to recover it. 00:29:11.123 [2024-06-10 10:54:35.261872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.123 [2024-06-10 10:54:35.261882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.123 qpair failed and we were unable to recover it. 
00:29:11.123 [2024-06-10 10:54:35.262239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.123 [2024-06-10 10:54:35.262252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.123 qpair failed and we were unable to recover it. 00:29:11.123 [2024-06-10 10:54:35.262650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.123 [2024-06-10 10:54:35.262659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.123 qpair failed and we were unable to recover it. 00:29:11.123 [2024-06-10 10:54:35.263003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.123 [2024-06-10 10:54:35.263014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.123 qpair failed and we were unable to recover it. 00:29:11.123 [2024-06-10 10:54:35.263319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.123 [2024-06-10 10:54:35.263329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.123 qpair failed and we were unable to recover it. 00:29:11.123 [2024-06-10 10:54:35.263694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.123 [2024-06-10 10:54:35.263703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.123 qpair failed and we were unable to recover it. 00:29:11.123 [2024-06-10 10:54:35.264039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.123 [2024-06-10 10:54:35.264048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.123 qpair failed and we were unable to recover it. 00:29:11.123 [2024-06-10 10:54:35.264390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.123 [2024-06-10 10:54:35.264400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.123 qpair failed and we were unable to recover it. 00:29:11.123 [2024-06-10 10:54:35.264663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.123 [2024-06-10 10:54:35.264672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.123 qpair failed and we were unable to recover it. 00:29:11.123 [2024-06-10 10:54:35.265025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.123 [2024-06-10 10:54:35.265034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.123 qpair failed and we were unable to recover it. 00:29:11.123 [2024-06-10 10:54:35.265388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.123 [2024-06-10 10:54:35.265398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.123 qpair failed and we were unable to recover it. 
00:29:11.123 [2024-06-10 10:54:35.265728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.123 [2024-06-10 10:54:35.265737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.123 qpair failed and we were unable to recover it. 00:29:11.123 [2024-06-10 10:54:35.266069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.123 [2024-06-10 10:54:35.266079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.123 qpair failed and we were unable to recover it. 00:29:11.123 [2024-06-10 10:54:35.266327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.123 [2024-06-10 10:54:35.266337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.123 qpair failed and we were unable to recover it. 00:29:11.124 [2024-06-10 10:54:35.266567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.124 [2024-06-10 10:54:35.266579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.124 qpair failed and we were unable to recover it. 00:29:11.124 [2024-06-10 10:54:35.266958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.124 [2024-06-10 10:54:35.266967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.124 qpair failed and we were unable to recover it. 00:29:11.124 [2024-06-10 10:54:35.267298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.124 [2024-06-10 10:54:35.267308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.124 qpair failed and we were unable to recover it. 00:29:11.124 [2024-06-10 10:54:35.267635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.124 [2024-06-10 10:54:35.267644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.124 qpair failed and we were unable to recover it. 00:29:11.124 [2024-06-10 10:54:35.268044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.124 [2024-06-10 10:54:35.268053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.124 qpair failed and we were unable to recover it. 00:29:11.124 [2024-06-10 10:54:35.268391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.124 [2024-06-10 10:54:35.268401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.124 qpair failed and we were unable to recover it. 00:29:11.124 [2024-06-10 10:54:35.268768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.124 [2024-06-10 10:54:35.268777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.124 qpair failed and we were unable to recover it. 
00:29:11.124 [2024-06-10 10:54:35.269136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.124 [2024-06-10 10:54:35.269145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.124 qpair failed and we were unable to recover it. 00:29:11.124 [2024-06-10 10:54:35.269501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.124 [2024-06-10 10:54:35.269511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.124 qpair failed and we were unable to recover it. 00:29:11.124 [2024-06-10 10:54:35.269770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.124 [2024-06-10 10:54:35.269779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.124 qpair failed and we were unable to recover it. 00:29:11.124 [2024-06-10 10:54:35.270019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.124 [2024-06-10 10:54:35.270028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.124 qpair failed and we were unable to recover it. 00:29:11.124 [2024-06-10 10:54:35.270384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.124 [2024-06-10 10:54:35.270394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.124 qpair failed and we were unable to recover it. 00:29:11.124 [2024-06-10 10:54:35.270712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.124 [2024-06-10 10:54:35.270721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.124 qpair failed and we were unable to recover it. 00:29:11.124 [2024-06-10 10:54:35.271122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.124 [2024-06-10 10:54:35.271131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.124 qpair failed and we were unable to recover it. 00:29:11.124 [2024-06-10 10:54:35.271425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.124 [2024-06-10 10:54:35.271434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.124 qpair failed and we were unable to recover it. 00:29:11.124 [2024-06-10 10:54:35.271823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.124 [2024-06-10 10:54:35.271832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.124 qpair failed and we were unable to recover it. 00:29:11.124 [2024-06-10 10:54:35.272190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.124 [2024-06-10 10:54:35.272200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.124 qpair failed and we were unable to recover it. 
00:29:11.124 [2024-06-10 10:54:35.272540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.124 [2024-06-10 10:54:35.272551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.124 qpair failed and we were unable to recover it. 00:29:11.124 [2024-06-10 10:54:35.272902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.124 [2024-06-10 10:54:35.272911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.124 qpair failed and we were unable to recover it. 00:29:11.124 [2024-06-10 10:54:35.273321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.124 [2024-06-10 10:54:35.273330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.124 qpair failed and we were unable to recover it. 00:29:11.124 [2024-06-10 10:54:35.273682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.124 [2024-06-10 10:54:35.273691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.124 qpair failed and we were unable to recover it. 00:29:11.124 [2024-06-10 10:54:35.274078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.124 [2024-06-10 10:54:35.274088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.124 qpair failed and we were unable to recover it. 00:29:11.124 [2024-06-10 10:54:35.274342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.124 [2024-06-10 10:54:35.274351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.124 qpair failed and we were unable to recover it. 00:29:11.124 [2024-06-10 10:54:35.274696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.124 [2024-06-10 10:54:35.274705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.124 qpair failed and we were unable to recover it. 00:29:11.124 [2024-06-10 10:54:35.275093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.124 [2024-06-10 10:54:35.275103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.124 qpair failed and we were unable to recover it. 00:29:11.124 [2024-06-10 10:54:35.275458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.124 [2024-06-10 10:54:35.275467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.124 qpair failed and we were unable to recover it. 00:29:11.124 [2024-06-10 10:54:35.275859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.124 [2024-06-10 10:54:35.275868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.124 qpair failed and we were unable to recover it. 
00:29:11.124 [2024-06-10 10:54:35.276080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.124 [2024-06-10 10:54:35.276089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.124 qpair failed and we were unable to recover it. 00:29:11.124 [2024-06-10 10:54:35.276446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.124 [2024-06-10 10:54:35.276455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.124 qpair failed and we were unable to recover it. 00:29:11.124 [2024-06-10 10:54:35.276801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.124 [2024-06-10 10:54:35.276811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.124 qpair failed and we were unable to recover it. 00:29:11.124 [2024-06-10 10:54:35.277189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.124 [2024-06-10 10:54:35.277199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.124 qpair failed and we were unable to recover it. 00:29:11.124 [2024-06-10 10:54:35.277565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.124 [2024-06-10 10:54:35.277575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.124 qpair failed and we were unable to recover it. 00:29:11.124 [2024-06-10 10:54:35.277811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.124 [2024-06-10 10:54:35.277820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.124 qpair failed and we were unable to recover it. 00:29:11.124 [2024-06-10 10:54:35.278197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.124 [2024-06-10 10:54:35.278206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.124 qpair failed and we were unable to recover it. 00:29:11.124 [2024-06-10 10:54:35.278563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.124 [2024-06-10 10:54:35.278573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.124 qpair failed and we were unable to recover it. 00:29:11.124 [2024-06-10 10:54:35.278905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.124 [2024-06-10 10:54:35.278914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.124 qpair failed and we were unable to recover it. 00:29:11.124 [2024-06-10 10:54:35.279169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.124 [2024-06-10 10:54:35.279179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.125 qpair failed and we were unable to recover it. 
00:29:11.125 [2024-06-10 10:54:35.279573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.125 [2024-06-10 10:54:35.279583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.125 qpair failed and we were unable to recover it. 00:29:11.125 [2024-06-10 10:54:35.279943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.125 [2024-06-10 10:54:35.279952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.125 qpair failed and we were unable to recover it. 00:29:11.125 [2024-06-10 10:54:35.280259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.125 [2024-06-10 10:54:35.280268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.125 qpair failed and we were unable to recover it. 00:29:11.125 [2024-06-10 10:54:35.280628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.125 [2024-06-10 10:54:35.280638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.125 qpair failed and we were unable to recover it. 00:29:11.125 [2024-06-10 10:54:35.281055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.125 [2024-06-10 10:54:35.281064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.125 qpair failed and we were unable to recover it. 00:29:11.125 [2024-06-10 10:54:35.281405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.125 [2024-06-10 10:54:35.281415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.125 qpair failed and we were unable to recover it. 00:29:11.125 [2024-06-10 10:54:35.281752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.125 [2024-06-10 10:54:35.281761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.125 qpair failed and we were unable to recover it. 00:29:11.125 [2024-06-10 10:54:35.282115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.125 [2024-06-10 10:54:35.282124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.125 qpair failed and we were unable to recover it. 00:29:11.125 [2024-06-10 10:54:35.282461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.125 [2024-06-10 10:54:35.282471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.125 qpair failed and we were unable to recover it. 00:29:11.125 [2024-06-10 10:54:35.282828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.125 [2024-06-10 10:54:35.282837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.125 qpair failed and we were unable to recover it. 
00:29:11.125 [2024-06-10 10:54:35.283213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.125 [2024-06-10 10:54:35.283223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.125 qpair failed and we were unable to recover it. 00:29:11.125 [2024-06-10 10:54:35.283556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.125 [2024-06-10 10:54:35.283565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.125 qpair failed and we were unable to recover it. 00:29:11.125 [2024-06-10 10:54:35.283903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.125 [2024-06-10 10:54:35.283912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.125 qpair failed and we were unable to recover it. 00:29:11.125 [2024-06-10 10:54:35.284279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.125 [2024-06-10 10:54:35.284288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.125 qpair failed and we were unable to recover it. 00:29:11.125 [2024-06-10 10:54:35.284649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.125 [2024-06-10 10:54:35.284658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.125 qpair failed and we were unable to recover it. 00:29:11.125 [2024-06-10 10:54:35.285019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.125 [2024-06-10 10:54:35.285029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.125 qpair failed and we were unable to recover it. 00:29:11.125 [2024-06-10 10:54:35.285405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.125 [2024-06-10 10:54:35.285415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.125 qpair failed and we were unable to recover it. 00:29:11.125 [2024-06-10 10:54:35.285770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.125 [2024-06-10 10:54:35.285779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.125 qpair failed and we were unable to recover it. 00:29:11.125 [2024-06-10 10:54:35.286143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.125 [2024-06-10 10:54:35.286152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.125 qpair failed and we were unable to recover it. 00:29:11.125 [2024-06-10 10:54:35.286528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.125 [2024-06-10 10:54:35.286537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.125 qpair failed and we were unable to recover it. 
00:29:11.125 [2024-06-10 10:54:35.286910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.125 [2024-06-10 10:54:35.286921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.125 qpair failed and we were unable to recover it. 00:29:11.125 [2024-06-10 10:54:35.287288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.125 [2024-06-10 10:54:35.287299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.125 qpair failed and we were unable to recover it. 00:29:11.125 [2024-06-10 10:54:35.287666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.125 [2024-06-10 10:54:35.287675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.125 qpair failed and we were unable to recover it. 00:29:11.125 [2024-06-10 10:54:35.287957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.125 [2024-06-10 10:54:35.287967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.125 qpair failed and we were unable to recover it. 00:29:11.125 [2024-06-10 10:54:35.288341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.125 [2024-06-10 10:54:35.288353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.125 qpair failed and we were unable to recover it. 00:29:11.125 [2024-06-10 10:54:35.288689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.125 [2024-06-10 10:54:35.288699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.125 qpair failed and we were unable to recover it. 00:29:11.125 [2024-06-10 10:54:35.289144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.125 [2024-06-10 10:54:35.289154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.125 qpair failed and we were unable to recover it. 00:29:11.125 [2024-06-10 10:54:35.289579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.125 [2024-06-10 10:54:35.289588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.125 qpair failed and we were unable to recover it. 00:29:11.125 [2024-06-10 10:54:35.289944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.125 [2024-06-10 10:54:35.289952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.125 qpair failed and we were unable to recover it. 00:29:11.125 [2024-06-10 10:54:35.290305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.125 [2024-06-10 10:54:35.290314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.125 qpair failed and we were unable to recover it. 
00:29:11.125 [2024-06-10 10:54:35.290608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.125 [2024-06-10 10:54:35.290617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.125 qpair failed and we were unable to recover it. 00:29:11.126 [2024-06-10 10:54:35.291001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.126 [2024-06-10 10:54:35.291010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.126 qpair failed and we were unable to recover it. 00:29:11.126 [2024-06-10 10:54:35.291347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.126 [2024-06-10 10:54:35.291357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.126 qpair failed and we were unable to recover it. 00:29:11.126 [2024-06-10 10:54:35.291704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.126 [2024-06-10 10:54:35.291713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.126 qpair failed and we were unable to recover it. 00:29:11.126 [2024-06-10 10:54:35.292072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.126 [2024-06-10 10:54:35.292082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.126 qpair failed and we were unable to recover it. 00:29:11.126 [2024-06-10 10:54:35.292442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.126 [2024-06-10 10:54:35.292452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.126 qpair failed and we were unable to recover it. 00:29:11.126 [2024-06-10 10:54:35.292827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.126 [2024-06-10 10:54:35.292836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.126 qpair failed and we were unable to recover it. 00:29:11.126 [2024-06-10 10:54:35.293200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.126 [2024-06-10 10:54:35.293210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.126 qpair failed and we were unable to recover it. 00:29:11.126 [2024-06-10 10:54:35.293571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.126 [2024-06-10 10:54:35.293581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.126 qpair failed and we were unable to recover it. 00:29:11.126 [2024-06-10 10:54:35.293918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.126 [2024-06-10 10:54:35.293927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.126 qpair failed and we were unable to recover it. 
00:29:11.126 [2024-06-10 10:54:35.294280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.126 [2024-06-10 10:54:35.294290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.126 qpair failed and we were unable to recover it. 00:29:11.126 [2024-06-10 10:54:35.294704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.126 [2024-06-10 10:54:35.294713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.126 qpair failed and we were unable to recover it. 00:29:11.126 [2024-06-10 10:54:35.295078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.126 [2024-06-10 10:54:35.295089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.126 qpair failed and we were unable to recover it. 00:29:11.126 [2024-06-10 10:54:35.295439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.126 [2024-06-10 10:54:35.295448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.126 qpair failed and we were unable to recover it. 00:29:11.126 [2024-06-10 10:54:35.295655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.126 [2024-06-10 10:54:35.295666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.126 qpair failed and we were unable to recover it. 00:29:11.126 [2024-06-10 10:54:35.296025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.126 [2024-06-10 10:54:35.296034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.126 qpair failed and we were unable to recover it. 00:29:11.126 [2024-06-10 10:54:35.296355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.126 [2024-06-10 10:54:35.296365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.126 qpair failed and we were unable to recover it. 00:29:11.126 [2024-06-10 10:54:35.296791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.126 [2024-06-10 10:54:35.296803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.126 qpair failed and we were unable to recover it. 00:29:11.126 [2024-06-10 10:54:35.297179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.126 [2024-06-10 10:54:35.297189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.126 qpair failed and we were unable to recover it. 00:29:11.126 [2024-06-10 10:54:35.297566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.126 [2024-06-10 10:54:35.297575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.126 qpair failed and we were unable to recover it. 
00:29:11.126 [2024-06-10 10:54:35.297910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.126 [2024-06-10 10:54:35.297919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.126 qpair failed and we were unable to recover it. 00:29:11.126 [2024-06-10 10:54:35.298280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.126 [2024-06-10 10:54:35.298289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.126 qpair failed and we were unable to recover it. 00:29:11.126 [2024-06-10 10:54:35.298668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.126 [2024-06-10 10:54:35.298676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.126 qpair failed and we were unable to recover it. 00:29:11.126 [2024-06-10 10:54:35.299052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.126 [2024-06-10 10:54:35.299061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.126 qpair failed and we were unable to recover it. 00:29:11.126 [2024-06-10 10:54:35.299419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.126 [2024-06-10 10:54:35.299429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.126 qpair failed and we were unable to recover it. 00:29:11.126 [2024-06-10 10:54:35.299805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.126 [2024-06-10 10:54:35.299814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.126 qpair failed and we were unable to recover it. 00:29:11.126 [2024-06-10 10:54:35.300181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.126 [2024-06-10 10:54:35.300191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.126 qpair failed and we were unable to recover it. 00:29:11.126 [2024-06-10 10:54:35.300461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.126 [2024-06-10 10:54:35.300471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.126 qpair failed and we were unable to recover it. 00:29:11.126 [2024-06-10 10:54:35.300856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.126 [2024-06-10 10:54:35.300866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.126 qpair failed and we were unable to recover it. 00:29:11.126 [2024-06-10 10:54:35.301254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.126 [2024-06-10 10:54:35.301265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.126 qpair failed and we were unable to recover it. 
00:29:11.126 [2024-06-10 10:54:35.301603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.126 [2024-06-10 10:54:35.301614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.126 qpair failed and we were unable to recover it. 00:29:11.126 [2024-06-10 10:54:35.301985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.126 [2024-06-10 10:54:35.301994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.126 qpair failed and we were unable to recover it. 00:29:11.126 [2024-06-10 10:54:35.302361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.126 [2024-06-10 10:54:35.302372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.126 qpair failed and we were unable to recover it. 00:29:11.126 [2024-06-10 10:54:35.302609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.126 [2024-06-10 10:54:35.302618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.126 qpair failed and we were unable to recover it. 00:29:11.126 [2024-06-10 10:54:35.302983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.126 [2024-06-10 10:54:35.302993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.126 qpair failed and we were unable to recover it. 00:29:11.126 [2024-06-10 10:54:35.303355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.126 [2024-06-10 10:54:35.303365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.126 qpair failed and we were unable to recover it. 00:29:11.126 [2024-06-10 10:54:35.303695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.126 [2024-06-10 10:54:35.303705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.126 qpair failed and we were unable to recover it. 00:29:11.126 [2024-06-10 10:54:35.304060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.126 [2024-06-10 10:54:35.304070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.126 qpair failed and we were unable to recover it. 00:29:11.126 [2024-06-10 10:54:35.304445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.126 [2024-06-10 10:54:35.304455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.126 qpair failed and we were unable to recover it. 00:29:11.127 [2024-06-10 10:54:35.304816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.127 [2024-06-10 10:54:35.304826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.127 qpair failed and we were unable to recover it. 
00:29:11.127 [2024-06-10 10:54:35.305187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.127 [2024-06-10 10:54:35.305197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.127 qpair failed and we were unable to recover it. 00:29:11.127 [2024-06-10 10:54:35.305528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.127 [2024-06-10 10:54:35.305539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.127 qpair failed and we were unable to recover it. 00:29:11.127 [2024-06-10 10:54:35.305924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.127 [2024-06-10 10:54:35.305933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.127 qpair failed and we were unable to recover it. 00:29:11.127 [2024-06-10 10:54:35.306083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.127 [2024-06-10 10:54:35.306093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.127 qpair failed and we were unable to recover it. 00:29:11.127 [2024-06-10 10:54:35.306459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.127 [2024-06-10 10:54:35.306474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.127 qpair failed and we were unable to recover it. 00:29:11.127 [2024-06-10 10:54:35.306875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.127 [2024-06-10 10:54:35.306885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.127 qpair failed and we were unable to recover it. 00:29:11.127 [2024-06-10 10:54:35.307269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.127 [2024-06-10 10:54:35.307279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.127 qpair failed and we were unable to recover it. 00:29:11.127 [2024-06-10 10:54:35.307717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.127 [2024-06-10 10:54:35.307727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.127 qpair failed and we were unable to recover it. 00:29:11.127 [2024-06-10 10:54:35.308087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.127 [2024-06-10 10:54:35.308097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.127 qpair failed and we were unable to recover it. 00:29:11.127 [2024-06-10 10:54:35.308321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.127 [2024-06-10 10:54:35.308331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.127 qpair failed and we were unable to recover it. 
00:29:11.127 [2024-06-10 10:54:35.308586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.127 [2024-06-10 10:54:35.308596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.127 qpair failed and we were unable to recover it. 00:29:11.127 [2024-06-10 10:54:35.309001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.127 [2024-06-10 10:54:35.309010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.127 qpair failed and we were unable to recover it. 00:29:11.127 [2024-06-10 10:54:35.309448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.127 [2024-06-10 10:54:35.309458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.127 qpair failed and we were unable to recover it. 00:29:11.127 [2024-06-10 10:54:35.309831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.127 [2024-06-10 10:54:35.309842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.127 qpair failed and we were unable to recover it. 00:29:11.127 [2024-06-10 10:54:35.310227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.127 [2024-06-10 10:54:35.310237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.127 qpair failed and we were unable to recover it. 00:29:11.127 [2024-06-10 10:54:35.310589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.127 [2024-06-10 10:54:35.310599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.127 qpair failed and we were unable to recover it. 00:29:11.127 [2024-06-10 10:54:35.310972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.127 [2024-06-10 10:54:35.310982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.127 qpair failed and we were unable to recover it. 00:29:11.127 [2024-06-10 10:54:35.311375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.127 [2024-06-10 10:54:35.311386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.127 qpair failed and we were unable to recover it. 00:29:11.127 [2024-06-10 10:54:35.311754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.127 [2024-06-10 10:54:35.311765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.127 qpair failed and we were unable to recover it. 00:29:11.127 [2024-06-10 10:54:35.312125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.127 [2024-06-10 10:54:35.312135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.127 qpair failed and we were unable to recover it. 
00:29:11.127 [2024-06-10 10:54:35.312521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.127 [2024-06-10 10:54:35.312532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.127 qpair failed and we were unable to recover it. 00:29:11.127 [2024-06-10 10:54:35.312895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.127 [2024-06-10 10:54:35.312904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.127 qpair failed and we were unable to recover it. 00:29:11.127 [2024-06-10 10:54:35.313330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.127 [2024-06-10 10:54:35.313340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.127 qpair failed and we were unable to recover it. 00:29:11.127 [2024-06-10 10:54:35.313709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.127 [2024-06-10 10:54:35.313720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.127 qpair failed and we were unable to recover it. 00:29:11.127 [2024-06-10 10:54:35.314090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.127 [2024-06-10 10:54:35.314100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.127 qpair failed and we were unable to recover it. 00:29:11.127 [2024-06-10 10:54:35.314459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.127 [2024-06-10 10:54:35.314469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.127 qpair failed and we were unable to recover it. 00:29:11.127 [2024-06-10 10:54:35.314831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.127 [2024-06-10 10:54:35.314841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.127 qpair failed and we were unable to recover it. 00:29:11.127 [2024-06-10 10:54:35.315199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.127 [2024-06-10 10:54:35.315209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.127 qpair failed and we were unable to recover it. 00:29:11.127 [2024-06-10 10:54:35.315571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.127 [2024-06-10 10:54:35.315581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.127 qpair failed and we were unable to recover it. 00:29:11.127 [2024-06-10 10:54:35.315942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.127 [2024-06-10 10:54:35.315952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.127 qpair failed and we were unable to recover it. 
00:29:11.127 [2024-06-10 10:54:35.316213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.127 [2024-06-10 10:54:35.316223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.127 qpair failed and we were unable to recover it. 00:29:11.127 [2024-06-10 10:54:35.316595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.127 [2024-06-10 10:54:35.316605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.127 qpair failed and we were unable to recover it. 00:29:11.127 [2024-06-10 10:54:35.316942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.127 [2024-06-10 10:54:35.316951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.127 qpair failed and we were unable to recover it. 00:29:11.127 [2024-06-10 10:54:35.317309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.127 [2024-06-10 10:54:35.317319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.127 qpair failed and we were unable to recover it. 00:29:11.127 [2024-06-10 10:54:35.317715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.127 [2024-06-10 10:54:35.317725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.127 qpair failed and we were unable to recover it. 00:29:11.127 [2024-06-10 10:54:35.318031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.127 [2024-06-10 10:54:35.318041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.128 qpair failed and we were unable to recover it. 00:29:11.128 [2024-06-10 10:54:35.318412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.128 [2024-06-10 10:54:35.318421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.128 qpair failed and we were unable to recover it. 00:29:11.128 [2024-06-10 10:54:35.318636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.128 [2024-06-10 10:54:35.318646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.128 qpair failed and we were unable to recover it. 00:29:11.128 [2024-06-10 10:54:35.319031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.128 [2024-06-10 10:54:35.319040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.128 qpair failed and we were unable to recover it. 00:29:11.128 [2024-06-10 10:54:35.319451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.128 [2024-06-10 10:54:35.319461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.128 qpair failed and we were unable to recover it. 
00:29:11.128 [2024-06-10 10:54:35.319802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.128 [2024-06-10 10:54:35.319811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.128 qpair failed and we were unable to recover it. 00:29:11.128 [2024-06-10 10:54:35.320184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.128 [2024-06-10 10:54:35.320193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.128 qpair failed and we were unable to recover it. 00:29:11.128 [2024-06-10 10:54:35.320536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.128 [2024-06-10 10:54:35.320546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.128 qpair failed and we were unable to recover it. 00:29:11.128 [2024-06-10 10:54:35.320922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.128 [2024-06-10 10:54:35.320931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.128 qpair failed and we were unable to recover it. 00:29:11.128 [2024-06-10 10:54:35.321150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.128 [2024-06-10 10:54:35.321159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.128 qpair failed and we were unable to recover it. 00:29:11.128 [2024-06-10 10:54:35.321531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.128 [2024-06-10 10:54:35.321544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.128 qpair failed and we were unable to recover it. 00:29:11.128 [2024-06-10 10:54:35.321922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.128 [2024-06-10 10:54:35.321930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.128 qpair failed and we were unable to recover it. 00:29:11.128 [2024-06-10 10:54:35.322265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.128 [2024-06-10 10:54:35.322275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.128 qpair failed and we were unable to recover it. 00:29:11.128 [2024-06-10 10:54:35.322674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.128 [2024-06-10 10:54:35.322685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.128 qpair failed and we were unable to recover it. 00:29:11.128 [2024-06-10 10:54:35.322910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.128 [2024-06-10 10:54:35.322919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.128 qpair failed and we were unable to recover it. 
00:29:11.128 [2024-06-10 10:54:35.323325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.128 [2024-06-10 10:54:35.323335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.128 qpair failed and we were unable to recover it. 00:29:11.128 [2024-06-10 10:54:35.323700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.128 [2024-06-10 10:54:35.323709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.128 qpair failed and we were unable to recover it. 00:29:11.128 [2024-06-10 10:54:35.324012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.128 [2024-06-10 10:54:35.324021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.128 qpair failed and we were unable to recover it. 00:29:11.128 [2024-06-10 10:54:35.324356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.128 [2024-06-10 10:54:35.324366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.128 qpair failed and we were unable to recover it. 00:29:11.128 [2024-06-10 10:54:35.324810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.128 [2024-06-10 10:54:35.324819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.128 qpair failed and we were unable to recover it. 00:29:11.128 [2024-06-10 10:54:35.325085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.128 [2024-06-10 10:54:35.325095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.128 qpair failed and we were unable to recover it. 00:29:11.128 [2024-06-10 10:54:35.325437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.128 [2024-06-10 10:54:35.325447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.128 qpair failed and we were unable to recover it. 00:29:11.128 [2024-06-10 10:54:35.325789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.128 [2024-06-10 10:54:35.325798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.128 qpair failed and we were unable to recover it. 00:29:11.128 [2024-06-10 10:54:35.326162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.128 [2024-06-10 10:54:35.326171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.128 qpair failed and we were unable to recover it. 00:29:11.128 [2024-06-10 10:54:35.326536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.128 [2024-06-10 10:54:35.326545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.128 qpair failed and we were unable to recover it. 
00:29:11.128 [2024-06-10 10:54:35.326989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.128 [2024-06-10 10:54:35.326998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.128 qpair failed and we were unable to recover it. 00:29:11.128 [2024-06-10 10:54:35.327341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.128 [2024-06-10 10:54:35.327351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.128 qpair failed and we were unable to recover it. 00:29:11.128 [2024-06-10 10:54:35.327715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.128 [2024-06-10 10:54:35.327724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.128 qpair failed and we were unable to recover it. 00:29:11.128 [2024-06-10 10:54:35.328081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.128 [2024-06-10 10:54:35.328090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.128 qpair failed and we were unable to recover it. 00:29:11.128 [2024-06-10 10:54:35.328442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.128 [2024-06-10 10:54:35.328452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.128 qpair failed and we were unable to recover it. 00:29:11.128 [2024-06-10 10:54:35.328837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.129 [2024-06-10 10:54:35.328846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.129 qpair failed and we were unable to recover it. 00:29:11.129 [2024-06-10 10:54:35.329190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.129 [2024-06-10 10:54:35.329200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.129 qpair failed and we were unable to recover it. 00:29:11.129 [2024-06-10 10:54:35.329685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.129 [2024-06-10 10:54:35.329695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.129 qpair failed and we were unable to recover it. 00:29:11.129 [2024-06-10 10:54:35.329905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.129 [2024-06-10 10:54:35.329916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.129 qpair failed and we were unable to recover it. 00:29:11.129 [2024-06-10 10:54:35.330181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.129 [2024-06-10 10:54:35.330191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.129 qpair failed and we were unable to recover it. 
00:29:11.129 [2024-06-10 10:54:35.330569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.129 [2024-06-10 10:54:35.330578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.129 qpair failed and we were unable to recover it. 00:29:11.129 [2024-06-10 10:54:35.330970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.129 [2024-06-10 10:54:35.330980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.129 qpair failed and we were unable to recover it. 00:29:11.129 [2024-06-10 10:54:35.331339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.129 [2024-06-10 10:54:35.331351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.129 qpair failed and we were unable to recover it. 00:29:11.129 [2024-06-10 10:54:35.331708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.129 [2024-06-10 10:54:35.331718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.129 qpair failed and we were unable to recover it. 00:29:11.129 [2024-06-10 10:54:35.332093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.129 [2024-06-10 10:54:35.332102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.129 qpair failed and we were unable to recover it. 00:29:11.129 [2024-06-10 10:54:35.332334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.129 [2024-06-10 10:54:35.332343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.129 qpair failed and we were unable to recover it. 00:29:11.129 [2024-06-10 10:54:35.332714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.129 [2024-06-10 10:54:35.332723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.129 qpair failed and we were unable to recover it. 00:29:11.129 [2024-06-10 10:54:35.333128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.129 [2024-06-10 10:54:35.333138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.129 qpair failed and we were unable to recover it. 00:29:11.129 [2024-06-10 10:54:35.333500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.129 [2024-06-10 10:54:35.333511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.129 qpair failed and we were unable to recover it. 00:29:11.129 [2024-06-10 10:54:35.333874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.129 [2024-06-10 10:54:35.333884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.129 qpair failed and we were unable to recover it. 
00:29:11.129 [2024-06-10 10:54:35.334139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.129 [2024-06-10 10:54:35.334148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.129 qpair failed and we were unable to recover it. 00:29:11.129 [2024-06-10 10:54:35.334484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.129 [2024-06-10 10:54:35.334494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.129 qpair failed and we were unable to recover it. 00:29:11.129 [2024-06-10 10:54:35.334837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.129 [2024-06-10 10:54:35.334846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.129 qpair failed and we were unable to recover it. 00:29:11.129 [2024-06-10 10:54:35.335207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.129 [2024-06-10 10:54:35.335216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.129 qpair failed and we were unable to recover it. 00:29:11.129 [2024-06-10 10:54:35.335588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.129 [2024-06-10 10:54:35.335599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.129 qpair failed and we were unable to recover it. 00:29:11.129 [2024-06-10 10:54:35.335966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.129 [2024-06-10 10:54:35.335977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.129 qpair failed and we were unable to recover it. 00:29:11.129 [2024-06-10 10:54:35.336363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.129 [2024-06-10 10:54:35.336373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.129 qpair failed and we were unable to recover it. 00:29:11.129 [2024-06-10 10:54:35.336719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.129 [2024-06-10 10:54:35.336729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.129 qpair failed and we were unable to recover it. 00:29:11.129 [2024-06-10 10:54:35.337105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.129 [2024-06-10 10:54:35.337114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.129 qpair failed and we were unable to recover it. 00:29:11.129 [2024-06-10 10:54:35.337464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.129 [2024-06-10 10:54:35.337474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.129 qpair failed and we were unable to recover it. 
00:29:11.129 [2024-06-10 10:54:35.337742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.129 [2024-06-10 10:54:35.337752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.129 qpair failed and we were unable to recover it. 00:29:11.129 [2024-06-10 10:54:35.338103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.129 [2024-06-10 10:54:35.338113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.129 qpair failed and we were unable to recover it. 00:29:11.129 [2024-06-10 10:54:35.338469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.129 [2024-06-10 10:54:35.338479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.129 qpair failed and we were unable to recover it. 00:29:11.129 [2024-06-10 10:54:35.338741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.129 [2024-06-10 10:54:35.338750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.129 qpair failed and we were unable to recover it. 00:29:11.129 [2024-06-10 10:54:35.339098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.129 [2024-06-10 10:54:35.339107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.129 qpair failed and we were unable to recover it. 00:29:11.129 [2024-06-10 10:54:35.339568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.129 [2024-06-10 10:54:35.339578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.129 qpair failed and we were unable to recover it. 00:29:11.129 [2024-06-10 10:54:35.339942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.129 [2024-06-10 10:54:35.339952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.129 qpair failed and we were unable to recover it. 00:29:11.129 [2024-06-10 10:54:35.340314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.129 [2024-06-10 10:54:35.340324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.129 qpair failed and we were unable to recover it. 00:29:11.129 [2024-06-10 10:54:35.340636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.129 [2024-06-10 10:54:35.340645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.129 qpair failed and we were unable to recover it. 00:29:11.129 [2024-06-10 10:54:35.340912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.129 [2024-06-10 10:54:35.340924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.129 qpair failed and we were unable to recover it. 
00:29:11.129 [2024-06-10 10:54:35.341213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.129 [2024-06-10 10:54:35.341222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.130 qpair failed and we were unable to recover it. 00:29:11.130 [2024-06-10 10:54:35.341604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.130 [2024-06-10 10:54:35.341614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.130 qpair failed and we were unable to recover it. 00:29:11.130 [2024-06-10 10:54:35.342010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.130 [2024-06-10 10:54:35.342020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.130 qpair failed and we were unable to recover it. 00:29:11.130 [2024-06-10 10:54:35.342382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.130 [2024-06-10 10:54:35.342393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.130 qpair failed and we were unable to recover it. 00:29:11.130 [2024-06-10 10:54:35.342712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.130 [2024-06-10 10:54:35.342722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.130 qpair failed and we were unable to recover it. 00:29:11.130 [2024-06-10 10:54:35.343082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.130 [2024-06-10 10:54:35.343091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.130 qpair failed and we were unable to recover it. 00:29:11.130 [2024-06-10 10:54:35.343561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.130 [2024-06-10 10:54:35.343570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.130 qpair failed and we were unable to recover it. 00:29:11.130 [2024-06-10 10:54:35.343909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.130 [2024-06-10 10:54:35.343918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.130 qpair failed and we were unable to recover it. 00:29:11.130 [2024-06-10 10:54:35.344150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.130 [2024-06-10 10:54:35.344159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.130 qpair failed and we were unable to recover it. 00:29:11.130 [2024-06-10 10:54:35.344514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.130 [2024-06-10 10:54:35.344524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.130 qpair failed and we were unable to recover it. 
00:29:11.130 [2024-06-10 10:54:35.344861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.130 [2024-06-10 10:54:35.344871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.130 qpair failed and we were unable to recover it. 00:29:11.130 [2024-06-10 10:54:35.345161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.130 [2024-06-10 10:54:35.345171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.130 qpair failed and we were unable to recover it. 00:29:11.130 [2024-06-10 10:54:35.345400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.130 [2024-06-10 10:54:35.345409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.130 qpair failed and we were unable to recover it. 00:29:11.130 [2024-06-10 10:54:35.345777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.130 [2024-06-10 10:54:35.345787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.130 qpair failed and we were unable to recover it. 00:29:11.130 [2024-06-10 10:54:35.346173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.130 [2024-06-10 10:54:35.346183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.130 qpair failed and we were unable to recover it. 00:29:11.130 [2024-06-10 10:54:35.346549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.130 [2024-06-10 10:54:35.346559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.130 qpair failed and we were unable to recover it. 00:29:11.130 [2024-06-10 10:54:35.346816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.130 [2024-06-10 10:54:35.346826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.130 qpair failed and we were unable to recover it. 00:29:11.130 [2024-06-10 10:54:35.347175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.130 [2024-06-10 10:54:35.347185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.130 qpair failed and we were unable to recover it. 00:29:11.130 [2024-06-10 10:54:35.347539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.130 [2024-06-10 10:54:35.347549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.130 qpair failed and we were unable to recover it. 00:29:11.130 [2024-06-10 10:54:35.347905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.130 [2024-06-10 10:54:35.347915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.130 qpair failed and we were unable to recover it. 
00:29:11.130 [2024-06-10 10:54:35.348260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.130 [2024-06-10 10:54:35.348271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.130 qpair failed and we were unable to recover it. 00:29:11.130 [2024-06-10 10:54:35.348655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.130 [2024-06-10 10:54:35.348664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.130 qpair failed and we were unable to recover it. 00:29:11.130 [2024-06-10 10:54:35.349001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.130 [2024-06-10 10:54:35.349010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.130 qpair failed and we were unable to recover it. 00:29:11.130 [2024-06-10 10:54:35.349254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.130 [2024-06-10 10:54:35.349264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.130 qpair failed and we were unable to recover it. 00:29:11.130 [2024-06-10 10:54:35.349642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.130 [2024-06-10 10:54:35.349651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.130 qpair failed and we were unable to recover it. 00:29:11.130 [2024-06-10 10:54:35.350003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.130 [2024-06-10 10:54:35.350012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.130 qpair failed and we were unable to recover it. 00:29:11.130 [2024-06-10 10:54:35.350363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.130 [2024-06-10 10:54:35.350373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.130 qpair failed and we were unable to recover it. 00:29:11.130 [2024-06-10 10:54:35.350596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.130 [2024-06-10 10:54:35.350605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.130 qpair failed and we were unable to recover it. 00:29:11.130 [2024-06-10 10:54:35.350971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.130 [2024-06-10 10:54:35.350989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.130 qpair failed and we were unable to recover it. 00:29:11.130 [2024-06-10 10:54:35.351347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.130 [2024-06-10 10:54:35.351356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.130 qpair failed and we were unable to recover it. 
00:29:11.130 [2024-06-10 10:54:35.351697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.130 [2024-06-10 10:54:35.351707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.130 qpair failed and we were unable to recover it. 00:29:11.130 [2024-06-10 10:54:35.351924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.130 [2024-06-10 10:54:35.351933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.130 qpair failed and we were unable to recover it. 00:29:11.130 [2024-06-10 10:54:35.352262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.130 [2024-06-10 10:54:35.352271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.130 qpair failed and we were unable to recover it. 00:29:11.130 [2024-06-10 10:54:35.352630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.130 [2024-06-10 10:54:35.352639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.130 qpair failed and we were unable to recover it. 00:29:11.130 [2024-06-10 10:54:35.353032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.130 [2024-06-10 10:54:35.353042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.130 qpair failed and we were unable to recover it. 00:29:11.130 [2024-06-10 10:54:35.353406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.130 [2024-06-10 10:54:35.353416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.131 qpair failed and we were unable to recover it. 00:29:11.131 [2024-06-10 10:54:35.353756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.131 [2024-06-10 10:54:35.353764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.131 qpair failed and we were unable to recover it. 00:29:11.131 [2024-06-10 10:54:35.354153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.131 [2024-06-10 10:54:35.354162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.131 qpair failed and we were unable to recover it. 00:29:11.131 [2024-06-10 10:54:35.354521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.131 [2024-06-10 10:54:35.354531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.131 qpair failed and we were unable to recover it. 00:29:11.131 [2024-06-10 10:54:35.354889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.131 [2024-06-10 10:54:35.354899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.131 qpair failed and we were unable to recover it. 
00:29:11.131 [2024-06-10 10:54:35.355317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.131 [2024-06-10 10:54:35.355327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420
00:29:11.131 qpair failed and we were unable to recover it.
[... the same three messages (posix_sock_create connect() failure with errno = 111, nvme_tcp_qpair_connect_sock connection error for tqpair=0x13458c0 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeat roughly 200 more times, with wall-clock timestamps advancing from 10:54:35.355 through 10:54:35.428 ...]
00:29:11.411 [2024-06-10 10:54:35.428862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-06-10 10:54:35.428871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.411 qpair failed and we were unable to recover it. 00:29:11.411 [2024-06-10 10:54:35.429251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-06-10 10:54:35.429262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.411 qpair failed and we were unable to recover it. 00:29:11.411 [2024-06-10 10:54:35.429598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-06-10 10:54:35.429607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.411 qpair failed and we were unable to recover it. 00:29:11.411 [2024-06-10 10:54:35.429946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-06-10 10:54:35.429955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.411 qpair failed and we were unable to recover it. 00:29:11.411 [2024-06-10 10:54:35.430298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-06-10 10:54:35.430307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.411 qpair failed and we were unable to recover it. 00:29:11.411 [2024-06-10 10:54:35.430614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-06-10 10:54:35.430623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.411 qpair failed and we were unable to recover it. 00:29:11.411 [2024-06-10 10:54:35.430988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-06-10 10:54:35.430997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.411 qpair failed and we were unable to recover it. 00:29:11.411 [2024-06-10 10:54:35.431355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-06-10 10:54:35.431365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.411 qpair failed and we were unable to recover it. 00:29:11.411 [2024-06-10 10:54:35.431707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-06-10 10:54:35.431716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.411 qpair failed and we were unable to recover it. 00:29:11.411 [2024-06-10 10:54:35.431925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-06-10 10:54:35.431936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.411 qpair failed and we were unable to recover it. 
00:29:11.411 [2024-06-10 10:54:35.432377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-06-10 10:54:35.432386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.411 qpair failed and we were unable to recover it. 00:29:11.411 [2024-06-10 10:54:35.432716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-06-10 10:54:35.432726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.411 qpair failed and we were unable to recover it. 00:29:11.411 [2024-06-10 10:54:35.433069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-06-10 10:54:35.433077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.411 qpair failed and we were unable to recover it. 00:29:11.411 [2024-06-10 10:54:35.433434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-06-10 10:54:35.433443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.411 qpair failed and we were unable to recover it. 00:29:11.411 [2024-06-10 10:54:35.433771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-06-10 10:54:35.433781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.411 qpair failed and we were unable to recover it. 00:29:11.411 [2024-06-10 10:54:35.434131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-06-10 10:54:35.434141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.411 qpair failed and we were unable to recover it. 00:29:11.411 [2024-06-10 10:54:35.434505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-06-10 10:54:35.434515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.411 qpair failed and we were unable to recover it. 00:29:11.411 [2024-06-10 10:54:35.434893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-06-10 10:54:35.434903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.411 qpair failed and we were unable to recover it. 00:29:11.411 [2024-06-10 10:54:35.435263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-06-10 10:54:35.435274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.411 qpair failed and we were unable to recover it. 00:29:11.411 [2024-06-10 10:54:35.435654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-06-10 10:54:35.435663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.411 qpair failed and we were unable to recover it. 
00:29:11.411 [2024-06-10 10:54:35.436001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-06-10 10:54:35.436010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.411 qpair failed and we were unable to recover it. 00:29:11.411 [2024-06-10 10:54:35.436334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-06-10 10:54:35.436344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.411 qpair failed and we were unable to recover it. 00:29:11.411 [2024-06-10 10:54:35.436701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-06-10 10:54:35.436711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.411 qpair failed and we were unable to recover it. 00:29:11.411 [2024-06-10 10:54:35.437120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-06-10 10:54:35.437129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.411 qpair failed and we were unable to recover it. 00:29:11.411 [2024-06-10 10:54:35.437479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-06-10 10:54:35.437489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.411 qpair failed and we were unable to recover it. 00:29:11.411 [2024-06-10 10:54:35.437843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-06-10 10:54:35.437861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.411 qpair failed and we were unable to recover it. 00:29:11.411 [2024-06-10 10:54:35.438297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-06-10 10:54:35.438306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.411 qpair failed and we were unable to recover it. 00:29:11.411 [2024-06-10 10:54:35.438683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-06-10 10:54:35.438692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.411 qpair failed and we were unable to recover it. 00:29:11.411 [2024-06-10 10:54:35.439080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-06-10 10:54:35.439090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.411 qpair failed and we were unable to recover it. 00:29:11.411 [2024-06-10 10:54:35.439448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-06-10 10:54:35.439459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.411 qpair failed and we were unable to recover it. 
00:29:11.411 [2024-06-10 10:54:35.439815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-06-10 10:54:35.439825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.411 qpair failed and we were unable to recover it. 00:29:11.411 [2024-06-10 10:54:35.440138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-06-10 10:54:35.440147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.411 qpair failed and we were unable to recover it. 00:29:11.411 [2024-06-10 10:54:35.440558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.411 [2024-06-10 10:54:35.440568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.411 qpair failed and we were unable to recover it. 00:29:11.412 [2024-06-10 10:54:35.440893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.412 [2024-06-10 10:54:35.440902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.412 qpair failed and we were unable to recover it. 00:29:11.412 [2024-06-10 10:54:35.441265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.412 [2024-06-10 10:54:35.441276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.412 qpair failed and we were unable to recover it. 00:29:11.412 [2024-06-10 10:54:35.441645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.412 [2024-06-10 10:54:35.441655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.412 qpair failed and we were unable to recover it. 00:29:11.412 [2024-06-10 10:54:35.442029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.412 [2024-06-10 10:54:35.442039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.412 qpair failed and we were unable to recover it. 00:29:11.412 [2024-06-10 10:54:35.442402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.412 [2024-06-10 10:54:35.442411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.412 qpair failed and we were unable to recover it. 00:29:11.412 [2024-06-10 10:54:35.442746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.412 [2024-06-10 10:54:35.442762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.412 qpair failed and we were unable to recover it. 00:29:11.412 [2024-06-10 10:54:35.443117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.412 [2024-06-10 10:54:35.443126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.412 qpair failed and we were unable to recover it. 
00:29:11.412 [2024-06-10 10:54:35.443463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.412 [2024-06-10 10:54:35.443473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.412 qpair failed and we were unable to recover it. 00:29:11.412 [2024-06-10 10:54:35.443716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.412 [2024-06-10 10:54:35.443725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.412 qpair failed and we were unable to recover it. 00:29:11.412 [2024-06-10 10:54:35.444107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.412 [2024-06-10 10:54:35.444117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.412 qpair failed and we were unable to recover it. 00:29:11.412 [2024-06-10 10:54:35.444480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.412 [2024-06-10 10:54:35.444490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.412 qpair failed and we were unable to recover it. 00:29:11.412 [2024-06-10 10:54:35.444863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.412 [2024-06-10 10:54:35.444872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.412 qpair failed and we were unable to recover it. 00:29:11.412 [2024-06-10 10:54:35.445205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.412 [2024-06-10 10:54:35.445214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.412 qpair failed and we were unable to recover it. 00:29:11.412 [2024-06-10 10:54:35.445583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.412 [2024-06-10 10:54:35.445592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.412 qpair failed and we were unable to recover it. 00:29:11.412 [2024-06-10 10:54:35.445958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.412 [2024-06-10 10:54:35.445967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.412 qpair failed and we were unable to recover it. 00:29:11.412 [2024-06-10 10:54:35.446309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.412 [2024-06-10 10:54:35.446318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.412 qpair failed and we were unable to recover it. 00:29:11.412 [2024-06-10 10:54:35.446681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.412 [2024-06-10 10:54:35.446690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.412 qpair failed and we were unable to recover it. 
00:29:11.412 [2024-06-10 10:54:35.446959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.412 [2024-06-10 10:54:35.446968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.412 qpair failed and we were unable to recover it. 00:29:11.412 [2024-06-10 10:54:35.447325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.412 [2024-06-10 10:54:35.447335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.412 qpair failed and we were unable to recover it. 00:29:11.412 [2024-06-10 10:54:35.447679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.412 [2024-06-10 10:54:35.447688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.412 qpair failed and we were unable to recover it. 00:29:11.412 [2024-06-10 10:54:35.448053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.412 [2024-06-10 10:54:35.448063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.412 qpair failed and we were unable to recover it. 00:29:11.412 [2024-06-10 10:54:35.448419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.412 [2024-06-10 10:54:35.448429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.412 qpair failed and we were unable to recover it. 00:29:11.412 [2024-06-10 10:54:35.448783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.412 [2024-06-10 10:54:35.448793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.412 qpair failed and we were unable to recover it. 00:29:11.412 [2024-06-10 10:54:35.449175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.412 [2024-06-10 10:54:35.449185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.412 qpair failed and we were unable to recover it. 00:29:11.412 [2024-06-10 10:54:35.449543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.412 [2024-06-10 10:54:35.449553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.412 qpair failed and we were unable to recover it. 00:29:11.412 [2024-06-10 10:54:35.449922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.412 [2024-06-10 10:54:35.449932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.412 qpair failed and we were unable to recover it. 00:29:11.412 [2024-06-10 10:54:35.450294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.412 [2024-06-10 10:54:35.450303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.412 qpair failed and we were unable to recover it. 
00:29:11.412 [2024-06-10 10:54:35.450646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.413 [2024-06-10 10:54:35.450656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.413 qpair failed and we were unable to recover it. 00:29:11.413 [2024-06-10 10:54:35.451056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.413 [2024-06-10 10:54:35.451070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.413 qpair failed and we were unable to recover it. 00:29:11.413 [2024-06-10 10:54:35.451434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.413 [2024-06-10 10:54:35.451444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.413 qpair failed and we were unable to recover it. 00:29:11.413 [2024-06-10 10:54:35.451798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.413 [2024-06-10 10:54:35.451807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.413 qpair failed and we were unable to recover it. 00:29:11.413 [2024-06-10 10:54:35.452187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.413 [2024-06-10 10:54:35.452196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.413 qpair failed and we were unable to recover it. 00:29:11.413 [2024-06-10 10:54:35.452541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.413 [2024-06-10 10:54:35.452551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.413 qpair failed and we were unable to recover it. 00:29:11.413 [2024-06-10 10:54:35.452915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.413 [2024-06-10 10:54:35.452925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.413 qpair failed and we were unable to recover it. 00:29:11.413 [2024-06-10 10:54:35.453281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.413 [2024-06-10 10:54:35.453291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.413 qpair failed and we were unable to recover it. 00:29:11.413 [2024-06-10 10:54:35.453694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.413 [2024-06-10 10:54:35.453703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.413 qpair failed and we were unable to recover it. 00:29:11.413 [2024-06-10 10:54:35.454066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.413 [2024-06-10 10:54:35.454075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.413 qpair failed and we were unable to recover it. 
00:29:11.413 [2024-06-10 10:54:35.454437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.413 [2024-06-10 10:54:35.454446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.413 qpair failed and we were unable to recover it. 00:29:11.413 [2024-06-10 10:54:35.454802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.413 [2024-06-10 10:54:35.454811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.413 qpair failed and we were unable to recover it. 00:29:11.413 [2024-06-10 10:54:35.455170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.413 [2024-06-10 10:54:35.455179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.413 qpair failed and we were unable to recover it. 00:29:11.413 [2024-06-10 10:54:35.455381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.413 [2024-06-10 10:54:35.455392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.413 qpair failed and we were unable to recover it. 00:29:11.413 [2024-06-10 10:54:35.455768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.413 [2024-06-10 10:54:35.455778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.413 qpair failed and we were unable to recover it. 00:29:11.413 [2024-06-10 10:54:35.456133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.413 [2024-06-10 10:54:35.456142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.413 qpair failed and we were unable to recover it. 00:29:11.413 [2024-06-10 10:54:35.456495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.413 [2024-06-10 10:54:35.456504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.413 qpair failed and we were unable to recover it. 00:29:11.413 [2024-06-10 10:54:35.456899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.413 [2024-06-10 10:54:35.456908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.413 qpair failed and we were unable to recover it. 00:29:11.413 [2024-06-10 10:54:35.457253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.413 [2024-06-10 10:54:35.457264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.413 qpair failed and we were unable to recover it. 00:29:11.413 [2024-06-10 10:54:35.457622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.413 [2024-06-10 10:54:35.457631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.413 qpair failed and we were unable to recover it. 
00:29:11.413 [2024-06-10 10:54:35.457965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.413 [2024-06-10 10:54:35.457974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.413 qpair failed and we were unable to recover it. 00:29:11.413 [2024-06-10 10:54:35.458356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.413 [2024-06-10 10:54:35.458366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.413 qpair failed and we were unable to recover it. 00:29:11.413 [2024-06-10 10:54:35.458747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.413 [2024-06-10 10:54:35.458756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.413 qpair failed and we were unable to recover it. 00:29:11.413 [2024-06-10 10:54:35.459134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.413 [2024-06-10 10:54:35.459144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.413 qpair failed and we were unable to recover it. 00:29:11.413 [2024-06-10 10:54:35.459591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.413 [2024-06-10 10:54:35.459601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.413 qpair failed and we were unable to recover it. 00:29:11.413 [2024-06-10 10:54:35.459936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.413 [2024-06-10 10:54:35.459945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.413 qpair failed and we were unable to recover it. 00:29:11.413 [2024-06-10 10:54:35.460305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.413 [2024-06-10 10:54:35.460315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.413 qpair failed and we were unable to recover it. 00:29:11.413 [2024-06-10 10:54:35.460675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.413 [2024-06-10 10:54:35.460684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.413 qpair failed and we were unable to recover it. 00:29:11.413 [2024-06-10 10:54:35.460960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.413 [2024-06-10 10:54:35.460972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.413 qpair failed and we were unable to recover it. 00:29:11.413 [2024-06-10 10:54:35.461330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.413 [2024-06-10 10:54:35.461339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.413 qpair failed and we were unable to recover it. 
00:29:11.413 [2024-06-10 10:54:35.461551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.413 [2024-06-10 10:54:35.461561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.413 qpair failed and we were unable to recover it. 00:29:11.413 [2024-06-10 10:54:35.461926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.413 [2024-06-10 10:54:35.461935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.413 qpair failed and we were unable to recover it. 00:29:11.413 [2024-06-10 10:54:35.462275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.413 [2024-06-10 10:54:35.462284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.413 qpair failed and we were unable to recover it. 00:29:11.413 [2024-06-10 10:54:35.462663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.413 [2024-06-10 10:54:35.462672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.413 qpair failed and we were unable to recover it. 00:29:11.413 [2024-06-10 10:54:35.463029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.413 [2024-06-10 10:54:35.463038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.413 qpair failed and we were unable to recover it. 00:29:11.413 [2024-06-10 10:54:35.463390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.413 [2024-06-10 10:54:35.463400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.413 qpair failed and we were unable to recover it. 00:29:11.413 [2024-06-10 10:54:35.463734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.413 [2024-06-10 10:54:35.463743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.413 qpair failed and we were unable to recover it. 00:29:11.413 [2024-06-10 10:54:35.464106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.414 [2024-06-10 10:54:35.464116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.414 qpair failed and we were unable to recover it. 00:29:11.414 [2024-06-10 10:54:35.464499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.414 [2024-06-10 10:54:35.464508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.414 qpair failed and we were unable to recover it. 00:29:11.414 [2024-06-10 10:54:35.464765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.414 [2024-06-10 10:54:35.464774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.414 qpair failed and we were unable to recover it. 
00:29:11.414 [2024-06-10 10:54:35.465148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.414 [2024-06-10 10:54:35.465157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.414 qpair failed and we were unable to recover it. 00:29:11.414 [2024-06-10 10:54:35.465506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.414 [2024-06-10 10:54:35.465515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.414 qpair failed and we were unable to recover it. 00:29:11.414 [2024-06-10 10:54:35.465751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.414 [2024-06-10 10:54:35.465762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.414 qpair failed and we were unable to recover it. 00:29:11.414 [2024-06-10 10:54:35.466036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.414 [2024-06-10 10:54:35.466045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.414 qpair failed and we were unable to recover it. 00:29:11.414 [2024-06-10 10:54:35.466425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.414 [2024-06-10 10:54:35.466434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.414 qpair failed and we were unable to recover it. 00:29:11.414 [2024-06-10 10:54:35.466808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.414 [2024-06-10 10:54:35.466817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.414 qpair failed and we were unable to recover it. 00:29:11.414 [2024-06-10 10:54:35.467182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.414 [2024-06-10 10:54:35.467191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.414 qpair failed and we were unable to recover it. 00:29:11.414 [2024-06-10 10:54:35.467548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.414 [2024-06-10 10:54:35.467557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.414 qpair failed and we were unable to recover it. 00:29:11.414 [2024-06-10 10:54:35.467894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.414 [2024-06-10 10:54:35.467903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.414 qpair failed and we were unable to recover it. 00:29:11.414 [2024-06-10 10:54:35.468279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.414 [2024-06-10 10:54:35.468289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.414 qpair failed and we were unable to recover it. 
00:29:11.414 [2024-06-10 10:54:35.468667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.414 [2024-06-10 10:54:35.468675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.414 qpair failed and we were unable to recover it. 00:29:11.414 [2024-06-10 10:54:35.468950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.414 [2024-06-10 10:54:35.468960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.414 qpair failed and we were unable to recover it. 00:29:11.414 [2024-06-10 10:54:35.469346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.414 [2024-06-10 10:54:35.469356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.414 qpair failed and we were unable to recover it. 00:29:11.414 [2024-06-10 10:54:35.469696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.414 [2024-06-10 10:54:35.469705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.414 qpair failed and we were unable to recover it. 00:29:11.414 [2024-06-10 10:54:35.470096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.414 [2024-06-10 10:54:35.470106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.414 qpair failed and we were unable to recover it. 00:29:11.414 [2024-06-10 10:54:35.470550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.414 [2024-06-10 10:54:35.470560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.414 qpair failed and we were unable to recover it. 00:29:11.414 [2024-06-10 10:54:35.470896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.414 [2024-06-10 10:54:35.470905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.414 qpair failed and we were unable to recover it. 00:29:11.414 [2024-06-10 10:54:35.471203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.414 [2024-06-10 10:54:35.471212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.414 qpair failed and we were unable to recover it. 00:29:11.414 [2024-06-10 10:54:35.471581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.414 [2024-06-10 10:54:35.471591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.414 qpair failed and we were unable to recover it. 00:29:11.414 [2024-06-10 10:54:35.471744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.414 [2024-06-10 10:54:35.471753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.414 qpair failed and we were unable to recover it. 
00:29:11.414 [2024-06-10 10:54:35.472081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.414 [2024-06-10 10:54:35.472091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.414 qpair failed and we were unable to recover it. 00:29:11.414 [2024-06-10 10:54:35.472455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.414 [2024-06-10 10:54:35.472464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.414 qpair failed and we were unable to recover it. 00:29:11.414 [2024-06-10 10:54:35.472786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.414 [2024-06-10 10:54:35.472796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.414 qpair failed and we were unable to recover it. 00:29:11.414 [2024-06-10 10:54:35.473209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.414 [2024-06-10 10:54:35.473218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.414 qpair failed and we were unable to recover it. 00:29:11.414 [2024-06-10 10:54:35.473573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.414 [2024-06-10 10:54:35.473583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.414 qpair failed and we were unable to recover it. 00:29:11.414 [2024-06-10 10:54:35.473942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.414 [2024-06-10 10:54:35.473952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.414 qpair failed and we were unable to recover it. 00:29:11.414 [2024-06-10 10:54:35.474329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.414 [2024-06-10 10:54:35.474340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.414 qpair failed and we were unable to recover it. 00:29:11.414 [2024-06-10 10:54:35.474595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.414 [2024-06-10 10:54:35.474604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.414 qpair failed and we were unable to recover it. 00:29:11.414 [2024-06-10 10:54:35.474978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.414 [2024-06-10 10:54:35.474987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.414 qpair failed and we were unable to recover it. 00:29:11.414 [2024-06-10 10:54:35.475322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.414 [2024-06-10 10:54:35.475333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.414 qpair failed and we were unable to recover it. 
00:29:11.414 [2024-06-10 10:54:35.475712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.414 [2024-06-10 10:54:35.475721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420
00:29:11.414 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111; sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 10:54:35.475 to 10:54:35.548 ...]
00:29:11.421 [2024-06-10 10:54:35.548576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.421 [2024-06-10 10:54:35.548585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420
00:29:11.421 qpair failed and we were unable to recover it.
00:29:11.421 [2024-06-10 10:54:35.548960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-06-10 10:54:35.548970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-06-10 10:54:35.549328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-06-10 10:54:35.549337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-06-10 10:54:35.549673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-06-10 10:54:35.549682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-06-10 10:54:35.550043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-06-10 10:54:35.550053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-06-10 10:54:35.550286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-06-10 10:54:35.550295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-06-10 10:54:35.550629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-06-10 10:54:35.550638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-06-10 10:54:35.550998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-06-10 10:54:35.551010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-06-10 10:54:35.551350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-06-10 10:54:35.551361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-06-10 10:54:35.551730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-06-10 10:54:35.551739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-06-10 10:54:35.552084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-06-10 10:54:35.552094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 
00:29:11.421 [2024-06-10 10:54:35.552417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-06-10 10:54:35.552426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-06-10 10:54:35.552878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-06-10 10:54:35.552888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-06-10 10:54:35.553235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-06-10 10:54:35.553257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-06-10 10:54:35.553590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-06-10 10:54:35.553599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-06-10 10:54:35.553962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-06-10 10:54:35.553971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-06-10 10:54:35.554183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-06-10 10:54:35.554194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-06-10 10:54:35.554530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-06-10 10:54:35.554539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-06-10 10:54:35.554879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-06-10 10:54:35.554889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-06-10 10:54:35.555229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-06-10 10:54:35.555238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-06-10 10:54:35.555508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-06-10 10:54:35.555517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 
00:29:11.421 [2024-06-10 10:54:35.555773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-06-10 10:54:35.555782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-06-10 10:54:35.556137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-06-10 10:54:35.556146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-06-10 10:54:35.556510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-06-10 10:54:35.556520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-06-10 10:54:35.556874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-06-10 10:54:35.556883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-06-10 10:54:35.557218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-06-10 10:54:35.557227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-06-10 10:54:35.557587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-06-10 10:54:35.557605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-06-10 10:54:35.557980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-06-10 10:54:35.557989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-06-10 10:54:35.558323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-06-10 10:54:35.558332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-06-10 10:54:35.558712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-06-10 10:54:35.558720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-06-10 10:54:35.558985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-06-10 10:54:35.558994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 
00:29:11.421 [2024-06-10 10:54:35.559371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-06-10 10:54:35.559381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.422 [2024-06-10 10:54:35.559727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-06-10 10:54:35.559737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-06-10 10:54:35.560085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-06-10 10:54:35.560094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-06-10 10:54:35.560362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-06-10 10:54:35.560373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-06-10 10:54:35.560714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-06-10 10:54:35.560723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-06-10 10:54:35.561075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-06-10 10:54:35.561085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-06-10 10:54:35.561447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-06-10 10:54:35.561457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-06-10 10:54:35.561704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-06-10 10:54:35.561713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-06-10 10:54:35.562074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-06-10 10:54:35.562083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-06-10 10:54:35.562550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-06-10 10:54:35.562559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 
00:29:11.422 [2024-06-10 10:54:35.562901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-06-10 10:54:35.562911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-06-10 10:54:35.563271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-06-10 10:54:35.563281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-06-10 10:54:35.563515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-06-10 10:54:35.563524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-06-10 10:54:35.563883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-06-10 10:54:35.563892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-06-10 10:54:35.564234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-06-10 10:54:35.564251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-06-10 10:54:35.564634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-06-10 10:54:35.564644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-06-10 10:54:35.564972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-06-10 10:54:35.564981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-06-10 10:54:35.565363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-06-10 10:54:35.565372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-06-10 10:54:35.565786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-06-10 10:54:35.565795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-06-10 10:54:35.566143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-06-10 10:54:35.566153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 
00:29:11.422 [2024-06-10 10:54:35.566561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-06-10 10:54:35.566570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-06-10 10:54:35.566926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-06-10 10:54:35.566936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-06-10 10:54:35.567326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-06-10 10:54:35.567335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-06-10 10:54:35.567681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-06-10 10:54:35.567691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-06-10 10:54:35.568050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-06-10 10:54:35.568059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-06-10 10:54:35.568422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-06-10 10:54:35.568432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-06-10 10:54:35.568817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-06-10 10:54:35.568827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-06-10 10:54:35.569186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-06-10 10:54:35.569194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-06-10 10:54:35.569548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-06-10 10:54:35.569557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-06-10 10:54:35.569912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-06-10 10:54:35.569922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 
00:29:11.422 [2024-06-10 10:54:35.570284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-06-10 10:54:35.570296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.423 [2024-06-10 10:54:35.570691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-06-10 10:54:35.570700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-06-10 10:54:35.570896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-06-10 10:54:35.570906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-06-10 10:54:35.571272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-06-10 10:54:35.571282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-06-10 10:54:35.571673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-06-10 10:54:35.571682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-06-10 10:54:35.572058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-06-10 10:54:35.572068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-06-10 10:54:35.572421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-06-10 10:54:35.572430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-06-10 10:54:35.572808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-06-10 10:54:35.572818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-06-10 10:54:35.573137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-06-10 10:54:35.573146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-06-10 10:54:35.573507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-06-10 10:54:35.573516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 
00:29:11.423 [2024-06-10 10:54:35.573917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-06-10 10:54:35.573927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-06-10 10:54:35.574278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-06-10 10:54:35.574288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-06-10 10:54:35.574655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-06-10 10:54:35.574665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-06-10 10:54:35.575022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-06-10 10:54:35.575031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-06-10 10:54:35.575361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-06-10 10:54:35.575372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-06-10 10:54:35.575754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-06-10 10:54:35.575764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-06-10 10:54:35.576100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-06-10 10:54:35.576110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-06-10 10:54:35.576512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-06-10 10:54:35.576523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-06-10 10:54:35.576857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-06-10 10:54:35.576867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-06-10 10:54:35.577186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-06-10 10:54:35.577195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 
00:29:11.423 [2024-06-10 10:54:35.577531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-06-10 10:54:35.577540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-06-10 10:54:35.577658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-06-10 10:54:35.577668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-06-10 10:54:35.578041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-06-10 10:54:35.578050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-06-10 10:54:35.578350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-06-10 10:54:35.578359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-06-10 10:54:35.578715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-06-10 10:54:35.578725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-06-10 10:54:35.579135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-06-10 10:54:35.579144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-06-10 10:54:35.579472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-06-10 10:54:35.579482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-06-10 10:54:35.579842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-06-10 10:54:35.579851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-06-10 10:54:35.580187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-06-10 10:54:35.580196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-06-10 10:54:35.580688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-06-10 10:54:35.580699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 
00:29:11.423 [2024-06-10 10:54:35.581112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-06-10 10:54:35.581122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-06-10 10:54:35.581320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-06-10 10:54:35.581331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-06-10 10:54:35.581699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-06-10 10:54:35.581709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-06-10 10:54:35.582089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-06-10 10:54:35.582098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-06-10 10:54:35.582434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-06-10 10:54:35.582443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-06-10 10:54:35.582782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-06-10 10:54:35.582791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.424 [2024-06-10 10:54:35.583140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-06-10 10:54:35.583151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-06-10 10:54:35.583509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-06-10 10:54:35.583519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-06-10 10:54:35.583851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-06-10 10:54:35.583860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-06-10 10:54:35.584219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-06-10 10:54:35.584228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 
00:29:11.424 [2024-06-10 10:54:35.584586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-06-10 10:54:35.584596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-06-10 10:54:35.585010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-06-10 10:54:35.585020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-06-10 10:54:35.585383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-06-10 10:54:35.585394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-06-10 10:54:35.585773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-06-10 10:54:35.585782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-06-10 10:54:35.586028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-06-10 10:54:35.586036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-06-10 10:54:35.586416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-06-10 10:54:35.586426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-06-10 10:54:35.586781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-06-10 10:54:35.586791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-06-10 10:54:35.587096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-06-10 10:54:35.587106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-06-10 10:54:35.587466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-06-10 10:54:35.587476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-06-10 10:54:35.587751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-06-10 10:54:35.587760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 
00:29:11.424 [2024-06-10 10:54:35.588086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-06-10 10:54:35.588095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-06-10 10:54:35.588325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-06-10 10:54:35.588334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-06-10 10:54:35.588593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-06-10 10:54:35.588602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-06-10 10:54:35.588963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-06-10 10:54:35.588973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-06-10 10:54:35.589276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-06-10 10:54:35.589286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-06-10 10:54:35.589646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-06-10 10:54:35.589655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-06-10 10:54:35.590041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-06-10 10:54:35.590050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-06-10 10:54:35.590392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-06-10 10:54:35.590401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-06-10 10:54:35.590780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-06-10 10:54:35.590789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-06-10 10:54:35.591164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-06-10 10:54:35.591173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 
00:29:11.424 [2024-06-10 10:54:35.591463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-06-10 10:54:35.591478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-06-10 10:54:35.591840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-06-10 10:54:35.591849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-06-10 10:54:35.592189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-06-10 10:54:35.592198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-06-10 10:54:35.592481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-06-10 10:54:35.592491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-06-10 10:54:35.592859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-06-10 10:54:35.592868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-06-10 10:54:35.593208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-06-10 10:54:35.593216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-06-10 10:54:35.593440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-06-10 10:54:35.593449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-06-10 10:54:35.593696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-06-10 10:54:35.593706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-06-10 10:54:35.594064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-06-10 10:54:35.594078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-06-10 10:54:35.594431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-06-10 10:54:35.594440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 
00:29:11.424 [2024-06-10 10:54:35.594742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-06-10 10:54:35.594752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-06-10 10:54:35.595115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-06-10 10:54:35.595124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.425 [2024-06-10 10:54:35.595351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-06-10 10:54:35.595362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-06-10 10:54:35.595495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-06-10 10:54:35.595504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-06-10 10:54:35.595878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-06-10 10:54:35.595888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-06-10 10:54:35.596112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-06-10 10:54:35.596122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-06-10 10:54:35.596471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-06-10 10:54:35.596480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-06-10 10:54:35.596838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-06-10 10:54:35.596847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-06-10 10:54:35.597214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-06-10 10:54:35.597224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-06-10 10:54:35.597577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-06-10 10:54:35.597587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 
00:29:11.425 [2024-06-10 10:54:35.597925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-06-10 10:54:35.597934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-06-10 10:54:35.598285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-06-10 10:54:35.598295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-06-10 10:54:35.598490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-06-10 10:54:35.598501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-06-10 10:54:35.598832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-06-10 10:54:35.598841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-06-10 10:54:35.599180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-06-10 10:54:35.599189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-06-10 10:54:35.599582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-06-10 10:54:35.599592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-06-10 10:54:35.599824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-06-10 10:54:35.599833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-06-10 10:54:35.600204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-06-10 10:54:35.600214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-06-10 10:54:35.600577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-06-10 10:54:35.600586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-06-10 10:54:35.600928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-06-10 10:54:35.600937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 
00:29:11.425 [2024-06-10 10:54:35.601154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-06-10 10:54:35.601163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-06-10 10:54:35.601598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-06-10 10:54:35.601608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-06-10 10:54:35.601940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-06-10 10:54:35.601950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-06-10 10:54:35.602154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-06-10 10:54:35.602165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-06-10 10:54:35.602511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-06-10 10:54:35.602521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-06-10 10:54:35.602872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-06-10 10:54:35.602884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-06-10 10:54:35.603077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-06-10 10:54:35.603086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-06-10 10:54:35.603481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-06-10 10:54:35.603491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-06-10 10:54:35.603843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-06-10 10:54:35.603852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-06-10 10:54:35.604204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-06-10 10:54:35.604214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 
00:29:11.425 [2024-06-10 10:54:35.604547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-06-10 10:54:35.604557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-06-10 10:54:35.604966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-06-10 10:54:35.604975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-06-10 10:54:35.605311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-06-10 10:54:35.605320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-06-10 10:54:35.605772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-06-10 10:54:35.605781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-06-10 10:54:35.606125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-06-10 10:54:35.606135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-06-10 10:54:35.606461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-06-10 10:54:35.606471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-06-10 10:54:35.606807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-06-10 10:54:35.606817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-06-10 10:54:35.607064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-06-10 10:54:35.607072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-06-10 10:54:35.607465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-06-10 10:54:35.607474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-06-10 10:54:35.607863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.426 [2024-06-10 10:54:35.607872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.426 qpair failed and we were unable to recover it. 
00:29:11.426 [2024-06-10 10:54:35.608237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.426 [2024-06-10 10:54:35.608251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.426 qpair failed and we were unable to recover it. 00:29:11.426 [2024-06-10 10:54:35.608599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.426 [2024-06-10 10:54:35.608609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.426 qpair failed and we were unable to recover it. 00:29:11.426 [2024-06-10 10:54:35.609019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.426 [2024-06-10 10:54:35.609028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.426 qpair failed and we were unable to recover it. 00:29:11.426 [2024-06-10 10:54:35.609219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.426 [2024-06-10 10:54:35.609229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.426 qpair failed and we were unable to recover it. 00:29:11.426 [2024-06-10 10:54:35.609417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.426 [2024-06-10 10:54:35.609427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.426 qpair failed and we were unable to recover it. 00:29:11.426 [2024-06-10 10:54:35.609794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.426 [2024-06-10 10:54:35.609802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.426 qpair failed and we were unable to recover it. 00:29:11.426 [2024-06-10 10:54:35.610181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.426 [2024-06-10 10:54:35.610190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.426 qpair failed and we were unable to recover it. 00:29:11.426 [2024-06-10 10:54:35.610510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.426 [2024-06-10 10:54:35.610520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.426 qpair failed and we were unable to recover it. 00:29:11.426 [2024-06-10 10:54:35.610895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.426 [2024-06-10 10:54:35.610905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.426 qpair failed and we were unable to recover it. 00:29:11.426 [2024-06-10 10:54:35.611246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.426 [2024-06-10 10:54:35.611256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.426 qpair failed and we were unable to recover it. 
00:29:11.426 [2024-06-10 10:54:35.611631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.426 [2024-06-10 10:54:35.611641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.426 qpair failed and we were unable to recover it. 00:29:11.426 [2024-06-10 10:54:35.611999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.426 [2024-06-10 10:54:35.612007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.426 qpair failed and we were unable to recover it. 00:29:11.426 [2024-06-10 10:54:35.612473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.426 [2024-06-10 10:54:35.612509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.426 qpair failed and we were unable to recover it. 00:29:11.426 [2024-06-10 10:54:35.612897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.426 [2024-06-10 10:54:35.612909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.426 qpair failed and we were unable to recover it. 00:29:11.426 [2024-06-10 10:54:35.613257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.426 [2024-06-10 10:54:35.613268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.426 qpair failed and we were unable to recover it. 00:29:11.426 [2024-06-10 10:54:35.613620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.426 [2024-06-10 10:54:35.613629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.426 qpair failed and we were unable to recover it. 00:29:11.426 [2024-06-10 10:54:35.614007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.426 [2024-06-10 10:54:35.614016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.426 qpair failed and we were unable to recover it. 00:29:11.426 [2024-06-10 10:54:35.614396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.426 [2024-06-10 10:54:35.614405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.426 qpair failed and we were unable to recover it. 00:29:11.426 [2024-06-10 10:54:35.614735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.426 [2024-06-10 10:54:35.614745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.426 qpair failed and we were unable to recover it. 00:29:11.426 [2024-06-10 10:54:35.615107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.426 [2024-06-10 10:54:35.615116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.426 qpair failed and we were unable to recover it. 
00:29:11.426 [2024-06-10 10:54:35.615503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.426 [2024-06-10 10:54:35.615512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.426 qpair failed and we were unable to recover it. 00:29:11.426 [2024-06-10 10:54:35.615893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.426 [2024-06-10 10:54:35.615903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.426 qpair failed and we were unable to recover it. 00:29:11.426 [2024-06-10 10:54:35.616260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.426 [2024-06-10 10:54:35.616269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.426 qpair failed and we were unable to recover it. 00:29:11.426 [2024-06-10 10:54:35.616606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.426 [2024-06-10 10:54:35.616616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.426 qpair failed and we were unable to recover it. 00:29:11.426 [2024-06-10 10:54:35.616968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.426 [2024-06-10 10:54:35.616977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.426 qpair failed and we were unable to recover it. 00:29:11.426 [2024-06-10 10:54:35.617313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.426 [2024-06-10 10:54:35.617323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.426 qpair failed and we were unable to recover it. 00:29:11.426 [2024-06-10 10:54:35.617687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.426 [2024-06-10 10:54:35.617698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.426 qpair failed and we were unable to recover it. 00:29:11.426 [2024-06-10 10:54:35.618007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.426 [2024-06-10 10:54:35.618017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.426 qpair failed and we were unable to recover it. 00:29:11.426 [2024-06-10 10:54:35.618401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.426 [2024-06-10 10:54:35.618411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.426 qpair failed and we were unable to recover it. 00:29:11.426 [2024-06-10 10:54:35.618774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.426 [2024-06-10 10:54:35.618783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.426 qpair failed and we were unable to recover it. 
00:29:11.426 [2024-06-10 10:54:35.619123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.426 [2024-06-10 10:54:35.619132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.426 qpair failed and we were unable to recover it. 00:29:11.426 [2024-06-10 10:54:35.619475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.426 [2024-06-10 10:54:35.619484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.426 qpair failed and we were unable to recover it. 00:29:11.426 [2024-06-10 10:54:35.619834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.426 [2024-06-10 10:54:35.619843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.426 qpair failed and we were unable to recover it. 00:29:11.426 [2024-06-10 10:54:35.620199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.426 [2024-06-10 10:54:35.620208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.426 qpair failed and we were unable to recover it. 00:29:11.426 [2024-06-10 10:54:35.620473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.426 [2024-06-10 10:54:35.620482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.426 qpair failed and we were unable to recover it. 00:29:11.427 [2024-06-10 10:54:35.620711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.427 [2024-06-10 10:54:35.620719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.427 qpair failed and we were unable to recover it. 00:29:11.427 [2024-06-10 10:54:35.621066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.427 [2024-06-10 10:54:35.621077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.427 qpair failed and we were unable to recover it. 00:29:11.427 [2024-06-10 10:54:35.621439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.427 [2024-06-10 10:54:35.621449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.427 qpair failed and we were unable to recover it. 00:29:11.427 [2024-06-10 10:54:35.621786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.427 [2024-06-10 10:54:35.621795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.427 qpair failed and we were unable to recover it. 00:29:11.427 [2024-06-10 10:54:35.622147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.427 [2024-06-10 10:54:35.622162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.427 qpair failed and we were unable to recover it. 
00:29:11.427 [2024-06-10 10:54:35.622531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.427 [2024-06-10 10:54:35.622541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.427 qpair failed and we were unable to recover it. 00:29:11.427 [2024-06-10 10:54:35.622887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.427 [2024-06-10 10:54:35.622898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.427 qpair failed and we were unable to recover it. 00:29:11.427 [2024-06-10 10:54:35.623255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.427 [2024-06-10 10:54:35.623265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.427 qpair failed and we were unable to recover it. 00:29:11.427 [2024-06-10 10:54:35.623643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.427 [2024-06-10 10:54:35.623652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.427 qpair failed and we were unable to recover it. 00:29:11.427 [2024-06-10 10:54:35.623991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.427 [2024-06-10 10:54:35.624000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.427 qpair failed and we were unable to recover it. 00:29:11.427 [2024-06-10 10:54:35.624283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.427 [2024-06-10 10:54:35.624293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.427 qpair failed and we were unable to recover it. 00:29:11.427 [2024-06-10 10:54:35.624625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.427 [2024-06-10 10:54:35.624634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.427 qpair failed and we were unable to recover it. 00:29:11.427 [2024-06-10 10:54:35.625001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.427 [2024-06-10 10:54:35.625011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.427 qpair failed and we were unable to recover it. 00:29:11.427 [2024-06-10 10:54:35.625400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.427 [2024-06-10 10:54:35.625410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.427 qpair failed and we were unable to recover it. 00:29:11.427 [2024-06-10 10:54:35.625712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.427 [2024-06-10 10:54:35.625721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.427 qpair failed and we were unable to recover it. 
00:29:11.427 [2024-06-10 10:54:35.626091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.427 [2024-06-10 10:54:35.626100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.427 qpair failed and we were unable to recover it. 00:29:11.427 [2024-06-10 10:54:35.626463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.427 [2024-06-10 10:54:35.626473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.427 qpair failed and we were unable to recover it. 00:29:11.427 [2024-06-10 10:54:35.626854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.427 [2024-06-10 10:54:35.626864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.427 qpair failed and we were unable to recover it. 00:29:11.427 [2024-06-10 10:54:35.627216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.427 [2024-06-10 10:54:35.627228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.427 qpair failed and we were unable to recover it. 00:29:11.427 [2024-06-10 10:54:35.627588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.427 [2024-06-10 10:54:35.627599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.427 qpair failed and we were unable to recover it. 00:29:11.427 [2024-06-10 10:54:35.627955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.427 [2024-06-10 10:54:35.627965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.427 qpair failed and we were unable to recover it. 00:29:11.427 [2024-06-10 10:54:35.628342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.427 [2024-06-10 10:54:35.628352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.427 qpair failed and we were unable to recover it. 00:29:11.427 [2024-06-10 10:54:35.628700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.427 [2024-06-10 10:54:35.628710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.427 qpair failed and we were unable to recover it. 00:29:11.427 [2024-06-10 10:54:35.629083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.427 [2024-06-10 10:54:35.629093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.427 qpair failed and we were unable to recover it. 00:29:11.427 [2024-06-10 10:54:35.629253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.427 [2024-06-10 10:54:35.629265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.427 qpair failed and we were unable to recover it. 
00:29:11.427 [2024-06-10 10:54:35.629665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.427 [2024-06-10 10:54:35.629676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.427 qpair failed and we were unable to recover it. 00:29:11.427 [2024-06-10 10:54:35.630038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.427 [2024-06-10 10:54:35.630048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.427 qpair failed and we were unable to recover it. 00:29:11.427 [2024-06-10 10:54:35.630369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.427 [2024-06-10 10:54:35.630380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.427 qpair failed and we were unable to recover it. 00:29:11.428 [2024-06-10 10:54:35.630744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.428 [2024-06-10 10:54:35.630754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.428 qpair failed and we were unable to recover it. 00:29:11.428 [2024-06-10 10:54:35.631137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.428 [2024-06-10 10:54:35.631147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.428 qpair failed and we were unable to recover it. 00:29:11.428 [2024-06-10 10:54:35.631397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.428 [2024-06-10 10:54:35.631407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.428 qpair failed and we were unable to recover it. 00:29:11.428 [2024-06-10 10:54:35.631821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.428 [2024-06-10 10:54:35.631830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.428 qpair failed and we were unable to recover it. 00:29:11.428 [2024-06-10 10:54:35.632167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.428 [2024-06-10 10:54:35.632176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.428 qpair failed and we were unable to recover it. 00:29:11.428 [2024-06-10 10:54:35.632532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.428 [2024-06-10 10:54:35.632543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.428 qpair failed and we were unable to recover it. 00:29:11.428 [2024-06-10 10:54:35.632901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.428 [2024-06-10 10:54:35.632910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.428 qpair failed and we were unable to recover it. 
00:29:11.428 [2024-06-10 10:54:35.633318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.428 [2024-06-10 10:54:35.633330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.428 qpair failed and we were unable to recover it. 00:29:11.428 [2024-06-10 10:54:35.633603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.428 [2024-06-10 10:54:35.633613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.428 qpair failed and we were unable to recover it. 00:29:11.428 [2024-06-10 10:54:35.633985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.428 [2024-06-10 10:54:35.633995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.428 qpair failed and we were unable to recover it. 00:29:11.428 [2024-06-10 10:54:35.634320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.428 [2024-06-10 10:54:35.634331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.428 qpair failed and we were unable to recover it. 00:29:11.428 [2024-06-10 10:54:35.634550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.428 [2024-06-10 10:54:35.634560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.428 qpair failed and we were unable to recover it. 00:29:11.428 [2024-06-10 10:54:35.634893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.428 [2024-06-10 10:54:35.634902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.428 qpair failed and we were unable to recover it. 00:29:11.428 [2024-06-10 10:54:35.635291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.428 [2024-06-10 10:54:35.635301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.428 qpair failed and we were unable to recover it. 00:29:11.428 [2024-06-10 10:54:35.635612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.428 [2024-06-10 10:54:35.635622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.428 qpair failed and we were unable to recover it. 00:29:11.428 [2024-06-10 10:54:35.635960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.428 [2024-06-10 10:54:35.635969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.428 qpair failed and we were unable to recover it. 00:29:11.428 [2024-06-10 10:54:35.636325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.428 [2024-06-10 10:54:35.636336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.428 qpair failed and we were unable to recover it. 
00:29:11.428 [2024-06-10 10:54:35.636703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.428 [2024-06-10 10:54:35.636715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.428 qpair failed and we were unable to recover it. 00:29:11.428 [2024-06-10 10:54:35.637047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.428 [2024-06-10 10:54:35.637056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.428 qpair failed and we were unable to recover it. 00:29:11.428 [2024-06-10 10:54:35.637409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.428 [2024-06-10 10:54:35.637419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.428 qpair failed and we were unable to recover it. 00:29:11.428 [2024-06-10 10:54:35.637775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.428 [2024-06-10 10:54:35.637785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.428 qpair failed and we were unable to recover it. 00:29:11.428 [2024-06-10 10:54:35.638035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.428 [2024-06-10 10:54:35.638045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.428 qpair failed and we were unable to recover it. 00:29:11.428 [2024-06-10 10:54:35.638402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.428 [2024-06-10 10:54:35.638413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.428 qpair failed and we were unable to recover it. 00:29:11.428 [2024-06-10 10:54:35.638652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.428 [2024-06-10 10:54:35.638662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.428 qpair failed and we were unable to recover it. 00:29:11.428 [2024-06-10 10:54:35.639074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.428 [2024-06-10 10:54:35.639083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.428 qpair failed and we were unable to recover it. 00:29:11.428 [2024-06-10 10:54:35.639423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.428 [2024-06-10 10:54:35.639432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.428 qpair failed and we were unable to recover it. 00:29:11.428 [2024-06-10 10:54:35.639797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.428 [2024-06-10 10:54:35.639807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.428 qpair failed and we were unable to recover it. 
00:29:11.428 [2024-06-10 10:54:35.640169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.428 [2024-06-10 10:54:35.640179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.428 qpair failed and we were unable to recover it. 00:29:11.428 [2024-06-10 10:54:35.640593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.428 [2024-06-10 10:54:35.640603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.428 qpair failed and we were unable to recover it. 00:29:11.428 [2024-06-10 10:54:35.641038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.428 [2024-06-10 10:54:35.641047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.428 qpair failed and we were unable to recover it. 00:29:11.428 [2024-06-10 10:54:35.641475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.428 [2024-06-10 10:54:35.641485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.428 qpair failed and we were unable to recover it. 00:29:11.428 [2024-06-10 10:54:35.641789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.428 [2024-06-10 10:54:35.641799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.428 qpair failed and we were unable to recover it. 00:29:11.428 [2024-06-10 10:54:35.642047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.428 [2024-06-10 10:54:35.642057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.428 qpair failed and we were unable to recover it. 00:29:11.428 [2024-06-10 10:54:35.642391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.428 [2024-06-10 10:54:35.642402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.428 qpair failed and we were unable to recover it. 00:29:11.429 [2024-06-10 10:54:35.642711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.429 [2024-06-10 10:54:35.642721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.429 qpair failed and we were unable to recover it. 00:29:11.429 [2024-06-10 10:54:35.643086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.429 [2024-06-10 10:54:35.643095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.429 qpair failed and we were unable to recover it. 00:29:11.429 [2024-06-10 10:54:35.643433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.429 [2024-06-10 10:54:35.643443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.429 qpair failed and we were unable to recover it. 
00:29:11.429 [2024-06-10 10:54:35.643832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.429 [2024-06-10 10:54:35.643841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.429 qpair failed and we were unable to recover it. 00:29:11.429 [2024-06-10 10:54:35.644179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.429 [2024-06-10 10:54:35.644189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.429 qpair failed and we were unable to recover it. 00:29:11.429 [2024-06-10 10:54:35.644497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.429 [2024-06-10 10:54:35.644507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.429 qpair failed and we were unable to recover it. 00:29:11.429 [2024-06-10 10:54:35.644765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.429 [2024-06-10 10:54:35.644774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.429 qpair failed and we were unable to recover it. 00:29:11.429 [2024-06-10 10:54:35.645125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.429 [2024-06-10 10:54:35.645134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.429 qpair failed and we were unable to recover it. 00:29:11.429 [2024-06-10 10:54:35.645470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.429 [2024-06-10 10:54:35.645480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.429 qpair failed and we were unable to recover it. 00:29:11.429 [2024-06-10 10:54:35.645863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.429 [2024-06-10 10:54:35.645872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.429 qpair failed and we were unable to recover it. 00:29:11.429 [2024-06-10 10:54:35.646189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.429 [2024-06-10 10:54:35.646209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.429 qpair failed and we were unable to recover it. 00:29:11.429 [2024-06-10 10:54:35.646573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.429 [2024-06-10 10:54:35.646583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.429 qpair failed and we were unable to recover it. 00:29:11.429 [2024-06-10 10:54:35.646994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.429 [2024-06-10 10:54:35.647004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.429 qpair failed and we were unable to recover it. 
00:29:11.429 [2024-06-10 10:54:35.647185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.429 [2024-06-10 10:54:35.647195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.429 qpair failed and we were unable to recover it. 00:29:11.429 [2024-06-10 10:54:35.647444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.429 [2024-06-10 10:54:35.647454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.429 qpair failed and we were unable to recover it. 00:29:11.429 [2024-06-10 10:54:35.647807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.429 [2024-06-10 10:54:35.647817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.429 qpair failed and we were unable to recover it. 00:29:11.429 [2024-06-10 10:54:35.648189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.429 [2024-06-10 10:54:35.648199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.429 qpair failed and we were unable to recover it. 00:29:11.429 [2024-06-10 10:54:35.648555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.429 [2024-06-10 10:54:35.648565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.429 qpair failed and we were unable to recover it. 00:29:11.429 [2024-06-10 10:54:35.648925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.429 [2024-06-10 10:54:35.648936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.429 qpair failed and we were unable to recover it. 00:29:11.429 [2024-06-10 10:54:35.649317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.429 [2024-06-10 10:54:35.649326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.429 qpair failed and we were unable to recover it. 00:29:11.429 [2024-06-10 10:54:35.649670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.429 [2024-06-10 10:54:35.649679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.429 qpair failed and we were unable to recover it. 00:29:11.429 [2024-06-10 10:54:35.650065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.429 [2024-06-10 10:54:35.650074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.429 qpair failed and we were unable to recover it. 00:29:11.429 [2024-06-10 10:54:35.650419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.429 [2024-06-10 10:54:35.650429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.429 qpair failed and we were unable to recover it. 
00:29:11.429 [2024-06-10 10:54:35.650788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.429 [2024-06-10 10:54:35.650797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.429 qpair failed and we were unable to recover it. 00:29:11.429 [2024-06-10 10:54:35.651153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.429 [2024-06-10 10:54:35.651163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.429 qpair failed and we were unable to recover it. 00:29:11.429 [2024-06-10 10:54:35.651393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.429 [2024-06-10 10:54:35.651403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.429 qpair failed and we were unable to recover it. 00:29:11.429 [2024-06-10 10:54:35.651772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.429 [2024-06-10 10:54:35.651781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.429 qpair failed and we were unable to recover it. 00:29:11.429 [2024-06-10 10:54:35.651977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.429 [2024-06-10 10:54:35.651988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.429 qpair failed and we were unable to recover it. 00:29:11.429 [2024-06-10 10:54:35.652240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.429 [2024-06-10 10:54:35.652253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.429 qpair failed and we were unable to recover it. 00:29:11.429 [2024-06-10 10:54:35.652620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.429 [2024-06-10 10:54:35.652629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.429 qpair failed and we were unable to recover it. 00:29:11.429 [2024-06-10 10:54:35.652967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.429 [2024-06-10 10:54:35.652976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.429 qpair failed and we were unable to recover it. 00:29:11.429 [2024-06-10 10:54:35.653334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.429 [2024-06-10 10:54:35.653344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.429 qpair failed and we were unable to recover it. 00:29:11.429 [2024-06-10 10:54:35.653528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.429 [2024-06-10 10:54:35.653538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.429 qpair failed and we were unable to recover it. 
00:29:11.429 [2024-06-10 10:54:35.653784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.430 [2024-06-10 10:54:35.653793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.430 qpair failed and we were unable to recover it. 00:29:11.430 [2024-06-10 10:54:35.654075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.430 [2024-06-10 10:54:35.654085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.430 qpair failed and we were unable to recover it. 00:29:11.430 [2024-06-10 10:54:35.654453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.430 [2024-06-10 10:54:35.654463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.430 qpair failed and we were unable to recover it. 00:29:11.430 [2024-06-10 10:54:35.654738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.430 [2024-06-10 10:54:35.654747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.430 qpair failed and we were unable to recover it. 00:29:11.430 [2024-06-10 10:54:35.655137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.430 [2024-06-10 10:54:35.655146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.430 qpair failed and we were unable to recover it. 00:29:11.430 [2024-06-10 10:54:35.655493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.430 [2024-06-10 10:54:35.655504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.430 qpair failed and we were unable to recover it. 00:29:11.430 [2024-06-10 10:54:35.655866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.430 [2024-06-10 10:54:35.655875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.430 qpair failed and we were unable to recover it. 00:29:11.430 [2024-06-10 10:54:35.656252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.430 [2024-06-10 10:54:35.656261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.430 qpair failed and we were unable to recover it. 00:29:11.430 [2024-06-10 10:54:35.656595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.430 [2024-06-10 10:54:35.656604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.430 qpair failed and we were unable to recover it. 00:29:11.430 [2024-06-10 10:54:35.656973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.430 [2024-06-10 10:54:35.656982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.430 qpair failed and we were unable to recover it. 
00:29:11.430 [2024-06-10 10:54:35.657368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.430 [2024-06-10 10:54:35.657377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.430 qpair failed and we were unable to recover it. 00:29:11.430 [2024-06-10 10:54:35.657712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.430 [2024-06-10 10:54:35.657721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.430 qpair failed and we were unable to recover it. 00:29:11.430 [2024-06-10 10:54:35.658100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.430 [2024-06-10 10:54:35.658110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.430 qpair failed and we were unable to recover it. 00:29:11.430 [2024-06-10 10:54:35.658478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.430 [2024-06-10 10:54:35.658488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.430 qpair failed and we were unable to recover it. 00:29:11.430 [2024-06-10 10:54:35.658830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.430 [2024-06-10 10:54:35.658839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.430 qpair failed and we were unable to recover it. 00:29:11.430 [2024-06-10 10:54:35.659206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.430 [2024-06-10 10:54:35.659215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.430 qpair failed and we were unable to recover it. 00:29:11.430 [2024-06-10 10:54:35.659593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.430 [2024-06-10 10:54:35.659602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.430 qpair failed and we were unable to recover it. 00:29:11.430 [2024-06-10 10:54:35.659815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.430 [2024-06-10 10:54:35.659825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13458c0 with addr=10.0.0.2, port=4420 00:29:11.430 qpair failed and we were unable to recover it. 
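Every record in the run above fails the same way: connect() to 10.0.0.2 on port 4420 (the IANA-assigned NVMe-oF port) returns errno 111, so nvme_tcp_qpair_connect_sock() cannot bring the qpair up and it is reported as unrecoverable. On Linux, errno 111 is ECONNREFUSED, meaning nothing was accepting TCP connections on that address and port while these reconnects were being attempted. The short C sketch below reproduces the same failure mode outside SPDK; it is a hypothetical standalone example (the retry count and sleep interval are illustrative), not the posix_sock_create() implementation.

/* connect_retry_sketch.c - minimal, hypothetical illustration of the
 * ECONNREFUSED (errno 111) pattern in the log above; not SPDK code. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    for (int attempt = 1; attempt <= 5; attempt++) {   /* bounded retry loop */
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            printf("attempt %d: connected\n", attempt);
            close(fd);
            return 0;
        }
        /* With no NVMe/TCP listener on 10.0.0.2:4420 this prints errno 111
         * (ECONNREFUSED), the same value posix_sock_create() logs above. */
        printf("attempt %d: connect() failed, errno = %d (%s)\n",
               attempt, errno, strerror(errno));
        close(fd);
        sleep(1);
    }
    return 1;
}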
00:29:11.430 Read completed with error (sct=0, sc=8) 00:29:11.430 starting I/O failed 00:29:11.430 Read completed with error (sct=0, sc=8) 00:29:11.430 starting I/O failed 00:29:11.430 Read completed with error (sct=0, sc=8) 00:29:11.430 starting I/O failed 00:29:11.430 Read completed with error (sct=0, sc=8) 00:29:11.430 starting I/O failed 00:29:11.430 Read completed with error (sct=0, sc=8) 00:29:11.430 starting I/O failed 00:29:11.430 Read completed with error (sct=0, sc=8) 00:29:11.430 starting I/O failed 00:29:11.430 Read completed with error (sct=0, sc=8) 00:29:11.430 starting I/O failed 00:29:11.430 Read completed with error (sct=0, sc=8) 00:29:11.430 starting I/O failed 00:29:11.430 Read completed with error (sct=0, sc=8) 00:29:11.430 starting I/O failed 00:29:11.430 Read completed with error (sct=0, sc=8) 00:29:11.430 starting I/O failed 00:29:11.430 Read completed with error (sct=0, sc=8) 00:29:11.430 starting I/O failed 00:29:11.430 Read completed with error (sct=0, sc=8) 00:29:11.430 starting I/O failed 00:29:11.430 Read completed with error (sct=0, sc=8) 00:29:11.430 starting I/O failed 00:29:11.430 Read completed with error (sct=0, sc=8) 00:29:11.430 starting I/O failed 00:29:11.430 Read completed with error (sct=0, sc=8) 00:29:11.430 starting I/O failed 00:29:11.430 Read completed with error (sct=0, sc=8) 00:29:11.430 starting I/O failed 00:29:11.430 Read completed with error (sct=0, sc=8) 00:29:11.430 starting I/O failed 00:29:11.430 Read completed with error (sct=0, sc=8) 00:29:11.430 starting I/O failed 00:29:11.430 Read completed with error (sct=0, sc=8) 00:29:11.430 starting I/O failed 00:29:11.430 Write completed with error (sct=0, sc=8) 00:29:11.430 starting I/O failed 00:29:11.430 Write completed with error (sct=0, sc=8) 00:29:11.430 starting I/O failed 00:29:11.430 Read completed with error (sct=0, sc=8) 00:29:11.430 starting I/O failed 00:29:11.430 Read completed with error (sct=0, sc=8) 00:29:11.430 starting I/O failed 00:29:11.430 Read completed with error (sct=0, sc=8) 00:29:11.430 starting I/O failed 00:29:11.430 Write completed with error (sct=0, sc=8) 00:29:11.430 starting I/O failed 00:29:11.430 Write completed with error (sct=0, sc=8) 00:29:11.430 starting I/O failed 00:29:11.430 Write completed with error (sct=0, sc=8) 00:29:11.430 starting I/O failed 00:29:11.430 Read completed with error (sct=0, sc=8) 00:29:11.430 starting I/O failed 00:29:11.430 Read completed with error (sct=0, sc=8) 00:29:11.430 starting I/O failed 00:29:11.430 Read completed with error (sct=0, sc=8) 00:29:11.430 starting I/O failed 00:29:11.430 Read completed with error (sct=0, sc=8) 00:29:11.430 starting I/O failed 00:29:11.430 Write completed with error (sct=0, sc=8) 00:29:11.430 starting I/O failed 00:29:11.430 [2024-06-10 10:54:35.660033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.430 [2024-06-10 10:54:35.660424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.430 [2024-06-10 10:54:35.660436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.430 qpair failed and we were unable to recover it. 
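Context for the block above: every failed attempt reports errno = 111, which on Linux is ECONNREFUSED — the initiator can reach 10.0.0.2, but nothing is accepting TCP connections on port 4420 (the IANA-assigned NVMe/TCP port), so nvme_tcp_qpair_connect_sock cannot establish the socket. The outstanding I/Os are then failed back (the "Read/Write completed with error (sct=0, sc=8) ... starting I/O failed" lines) and spdk_nvme_qpair_process_completions reports CQ transport error -6, i.e. ENXIO ("No such device or address"), before the host retries with a fresh qpair. The sketch below is not part of the SPDK test scripts; it is a minimal standalone illustration, assuming a reachable target host with no listener bound to the port, of how a refused TCP connect surfaces as the errno seen in this log.

/* Minimal sketch (illustrative only, not SPDK code): dial the address/port
 * from the log and print the errno a refused connection yields. With a
 * reachable host and no NVMe/TCP listener on the port, the output matches
 * the log: "connect() failed, errno = 111" (ECONNREFUSED). */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),   /* port used throughout the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    else
        printf("connected: a listener is accepting on 10.0.0.2:4420\n");

    close(fd);
    return 0;
}

Once a listener is actually bound to the port (for example, the nvmf target finishing its startup), the same connect() succeeds and the retry loop recorded in the surrounding log entries stops.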
00:29:11.430 [2024-06-10 10:54:35.660817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.430 [2024-06-10 10:54:35.660824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.430 qpair failed and we were unable to recover it. 00:29:11.430 [2024-06-10 10:54:35.661201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.430 [2024-06-10 10:54:35.661209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.430 qpair failed and we were unable to recover it. 00:29:11.430 [2024-06-10 10:54:35.661534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.430 [2024-06-10 10:54:35.661542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.430 qpair failed and we were unable to recover it. 00:29:11.430 [2024-06-10 10:54:35.661915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.430 [2024-06-10 10:54:35.661921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.430 qpair failed and we were unable to recover it. 00:29:11.430 [2024-06-10 10:54:35.662258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.430 [2024-06-10 10:54:35.662266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.430 qpair failed and we were unable to recover it. 00:29:11.430 [2024-06-10 10:54:35.662509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.430 [2024-06-10 10:54:35.662518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.430 qpair failed and we were unable to recover it. 00:29:11.430 [2024-06-10 10:54:35.662883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.430 [2024-06-10 10:54:35.662889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.430 qpair failed and we were unable to recover it. 00:29:11.431 [2024-06-10 10:54:35.663095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-06-10 10:54:35.663102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 00:29:11.431 [2024-06-10 10:54:35.663382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-06-10 10:54:35.663388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 00:29:11.431 [2024-06-10 10:54:35.663746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-06-10 10:54:35.663754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 
00:29:11.431 [2024-06-10 10:54:35.664133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-06-10 10:54:35.664141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 00:29:11.431 [2024-06-10 10:54:35.664521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-06-10 10:54:35.664528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 00:29:11.431 [2024-06-10 10:54:35.664782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-06-10 10:54:35.664789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 00:29:11.431 [2024-06-10 10:54:35.665142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-06-10 10:54:35.665149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 00:29:11.431 [2024-06-10 10:54:35.665336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-06-10 10:54:35.665344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 00:29:11.431 [2024-06-10 10:54:35.665678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-06-10 10:54:35.665685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 00:29:11.431 [2024-06-10 10:54:35.666040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-06-10 10:54:35.666047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 00:29:11.431 [2024-06-10 10:54:35.666408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-06-10 10:54:35.666415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 00:29:11.431 [2024-06-10 10:54:35.666772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-06-10 10:54:35.666779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 00:29:11.431 [2024-06-10 10:54:35.667165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-06-10 10:54:35.667171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 
00:29:11.431 [2024-06-10 10:54:35.667531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-06-10 10:54:35.667539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 00:29:11.431 [2024-06-10 10:54:35.667894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-06-10 10:54:35.667900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 00:29:11.431 [2024-06-10 10:54:35.668270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-06-10 10:54:35.668277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 00:29:11.431 [2024-06-10 10:54:35.668600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-06-10 10:54:35.668607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 00:29:11.431 [2024-06-10 10:54:35.668962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-06-10 10:54:35.668969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 00:29:11.431 [2024-06-10 10:54:35.669309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-06-10 10:54:35.669316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 00:29:11.431 [2024-06-10 10:54:35.669653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-06-10 10:54:35.669660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 00:29:11.431 [2024-06-10 10:54:35.669979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-06-10 10:54:35.669985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 00:29:11.431 [2024-06-10 10:54:35.670320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-06-10 10:54:35.670327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 00:29:11.431 [2024-06-10 10:54:35.670526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-06-10 10:54:35.670534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 
00:29:11.431 [2024-06-10 10:54:35.670857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-06-10 10:54:35.670864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 00:29:11.431 [2024-06-10 10:54:35.671102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-06-10 10:54:35.671108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 00:29:11.431 [2024-06-10 10:54:35.671471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-06-10 10:54:35.671480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 00:29:11.431 [2024-06-10 10:54:35.671801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-06-10 10:54:35.671808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 00:29:11.431 [2024-06-10 10:54:35.672188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-06-10 10:54:35.672196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 00:29:11.431 [2024-06-10 10:54:35.672545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-06-10 10:54:35.672553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 00:29:11.431 [2024-06-10 10:54:35.672907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-06-10 10:54:35.672914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 00:29:11.431 [2024-06-10 10:54:35.673173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-06-10 10:54:35.673180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 00:29:11.431 [2024-06-10 10:54:35.673363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-06-10 10:54:35.673371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 00:29:11.431 [2024-06-10 10:54:35.673755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-06-10 10:54:35.673762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 
00:29:11.431 [2024-06-10 10:54:35.674118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-06-10 10:54:35.674125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 00:29:11.431 [2024-06-10 10:54:35.674432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-06-10 10:54:35.674438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 00:29:11.431 [2024-06-10 10:54:35.674778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-06-10 10:54:35.674784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-06-10 10:54:35.675146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-06-10 10:54:35.675153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-06-10 10:54:35.675518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-06-10 10:54:35.675525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-06-10 10:54:35.675859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-06-10 10:54:35.675865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-06-10 10:54:35.676205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-06-10 10:54:35.676211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-06-10 10:54:35.676553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-06-10 10:54:35.676560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-06-10 10:54:35.676910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-06-10 10:54:35.676925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-06-10 10:54:35.677372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-06-10 10:54:35.677379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 
00:29:11.432 [2024-06-10 10:54:35.677539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-06-10 10:54:35.677546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-06-10 10:54:35.677871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-06-10 10:54:35.677877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-06-10 10:54:35.678222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-06-10 10:54:35.678228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-06-10 10:54:35.678584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-06-10 10:54:35.678591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-06-10 10:54:35.678779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-06-10 10:54:35.678786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-06-10 10:54:35.679098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-06-10 10:54:35.679105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-06-10 10:54:35.679458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-06-10 10:54:35.679465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-06-10 10:54:35.679832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-06-10 10:54:35.679838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-06-10 10:54:35.680209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-06-10 10:54:35.680215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-06-10 10:54:35.680558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-06-10 10:54:35.680565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 
00:29:11.432 [2024-06-10 10:54:35.680922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-06-10 10:54:35.680928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-06-10 10:54:35.681264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-06-10 10:54:35.681272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.705 [2024-06-10 10:54:35.681596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.705 [2024-06-10 10:54:35.681604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-06-10 10:54:35.681963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-06-10 10:54:35.681971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-06-10 10:54:35.682228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-06-10 10:54:35.682235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-06-10 10:54:35.682603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-06-10 10:54:35.682611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-06-10 10:54:35.682987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-06-10 10:54:35.682995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-06-10 10:54:35.683369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-06-10 10:54:35.683376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-06-10 10:54:35.683732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-06-10 10:54:35.683739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-06-10 10:54:35.684091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-06-10 10:54:35.684098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 
00:29:11.706 [2024-06-10 10:54:35.684312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-06-10 10:54:35.684319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-06-10 10:54:35.684677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-06-10 10:54:35.684683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-06-10 10:54:35.685017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-06-10 10:54:35.685025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-06-10 10:54:35.685280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-06-10 10:54:35.685287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-06-10 10:54:35.685689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-06-10 10:54:35.685696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-06-10 10:54:35.686030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-06-10 10:54:35.686037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-06-10 10:54:35.686360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-06-10 10:54:35.686367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-06-10 10:54:35.686725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-06-10 10:54:35.686732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-06-10 10:54:35.687067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-06-10 10:54:35.687073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-06-10 10:54:35.687320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-06-10 10:54:35.687327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 
00:29:11.706 [2024-06-10 10:54:35.687512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-06-10 10:54:35.687519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-06-10 10:54:35.687783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-06-10 10:54:35.687791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-06-10 10:54:35.688038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-06-10 10:54:35.688045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-06-10 10:54:35.688387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-06-10 10:54:35.688395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-06-10 10:54:35.688823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-06-10 10:54:35.688830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-06-10 10:54:35.689170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-06-10 10:54:35.689176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-06-10 10:54:35.689457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-06-10 10:54:35.689464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-06-10 10:54:35.689830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-06-10 10:54:35.689838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-06-10 10:54:35.690214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-06-10 10:54:35.690221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-06-10 10:54:35.690581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-06-10 10:54:35.690587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 
00:29:11.706 [2024-06-10 10:54:35.690860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-06-10 10:54:35.690867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-06-10 10:54:35.691219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-06-10 10:54:35.691226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-06-10 10:54:35.691462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-06-10 10:54:35.691469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-06-10 10:54:35.691815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-06-10 10:54:35.691823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-06-10 10:54:35.692200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-06-10 10:54:35.692207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-06-10 10:54:35.692566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-06-10 10:54:35.692574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-06-10 10:54:35.692918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-06-10 10:54:35.692925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-06-10 10:54:35.693277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-06-10 10:54:35.693283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-06-10 10:54:35.693646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-06-10 10:54:35.693652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-06-10 10:54:35.694026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-06-10 10:54:35.694032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 
00:29:11.707 [2024-06-10 10:54:35.694386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-06-10 10:54:35.694394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-06-10 10:54:35.694747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-06-10 10:54:35.694754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-06-10 10:54:35.695132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-06-10 10:54:35.695138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-06-10 10:54:35.695339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-06-10 10:54:35.695346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-06-10 10:54:35.695594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-06-10 10:54:35.695600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-06-10 10:54:35.695965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-06-10 10:54:35.695971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-06-10 10:54:35.696271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-06-10 10:54:35.696278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-06-10 10:54:35.696604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-06-10 10:54:35.696610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-06-10 10:54:35.696945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-06-10 10:54:35.696952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-06-10 10:54:35.697313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-06-10 10:54:35.697320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 
00:29:11.707 [2024-06-10 10:54:35.697623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-06-10 10:54:35.697630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-06-10 10:54:35.697964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-06-10 10:54:35.697970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-06-10 10:54:35.698215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-06-10 10:54:35.698225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-06-10 10:54:35.698606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-06-10 10:54:35.698614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-06-10 10:54:35.698983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-06-10 10:54:35.698990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-06-10 10:54:35.699327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-06-10 10:54:35.699333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-06-10 10:54:35.699595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-06-10 10:54:35.699601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-06-10 10:54:35.699946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-06-10 10:54:35.699953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-06-10 10:54:35.700339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-06-10 10:54:35.700346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-06-10 10:54:35.700719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-06-10 10:54:35.700732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 
00:29:11.707 [2024-06-10 10:54:35.701151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-06-10 10:54:35.701157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-06-10 10:54:35.701392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-06-10 10:54:35.701405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-06-10 10:54:35.701760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-06-10 10:54:35.701767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-06-10 10:54:35.702103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-06-10 10:54:35.702109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-06-10 10:54:35.702331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-06-10 10:54:35.702337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-06-10 10:54:35.702705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-06-10 10:54:35.702713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-06-10 10:54:35.703071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-06-10 10:54:35.703078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-06-10 10:54:35.703421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-06-10 10:54:35.703428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-06-10 10:54:35.703757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-06-10 10:54:35.703763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-06-10 10:54:35.703983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-06-10 10:54:35.703990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 
00:29:11.707 [2024-06-10 10:54:35.704350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-06-10 10:54:35.704357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-06-10 10:54:35.704672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-06-10 10:54:35.704679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-06-10 10:54:35.705037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-06-10 10:54:35.705044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-06-10 10:54:35.705408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-06-10 10:54:35.705415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-06-10 10:54:35.705777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-06-10 10:54:35.705785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-06-10 10:54:35.706145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-06-10 10:54:35.706151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-06-10 10:54:35.706519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-06-10 10:54:35.706526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-06-10 10:54:35.706887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-06-10 10:54:35.706894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-06-10 10:54:35.707279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-06-10 10:54:35.707286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-06-10 10:54:35.707644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-06-10 10:54:35.707650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 
00:29:11.708 [2024-06-10 10:54:35.708017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-06-10 10:54:35.708025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-06-10 10:54:35.708288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-06-10 10:54:35.708295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-06-10 10:54:35.708475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-06-10 10:54:35.708482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-06-10 10:54:35.708834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-06-10 10:54:35.708841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-06-10 10:54:35.709203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-06-10 10:54:35.709218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-06-10 10:54:35.709608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-06-10 10:54:35.709616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-06-10 10:54:35.709985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-06-10 10:54:35.709991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-06-10 10:54:35.710211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-06-10 10:54:35.710218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-06-10 10:54:35.710579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-06-10 10:54:35.710586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-06-10 10:54:35.710919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-06-10 10:54:35.710926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 
00:29:11.708 [2024-06-10 10:54:35.711287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-06-10 10:54:35.711294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-06-10 10:54:35.711652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-06-10 10:54:35.711660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-06-10 10:54:35.711991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-06-10 10:54:35.711999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-06-10 10:54:35.712353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-06-10 10:54:35.712359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-06-10 10:54:35.712738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-06-10 10:54:35.712745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-06-10 10:54:35.713095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-06-10 10:54:35.713101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-06-10 10:54:35.713436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-06-10 10:54:35.713442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-06-10 10:54:35.713805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-06-10 10:54:35.713812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-06-10 10:54:35.714146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-06-10 10:54:35.714153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-06-10 10:54:35.714482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-06-10 10:54:35.714489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 
00:29:11.708 [2024-06-10 10:54:35.714878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-06-10 10:54:35.714884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-06-10 10:54:35.715217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-06-10 10:54:35.715223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-06-10 10:54:35.715568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-06-10 10:54:35.715576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-06-10 10:54:35.715948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-06-10 10:54:35.715955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-06-10 10:54:35.716312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-06-10 10:54:35.716318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-06-10 10:54:35.716544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-06-10 10:54:35.716550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-06-10 10:54:35.716924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-06-10 10:54:35.716930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-06-10 10:54:35.717274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-06-10 10:54:35.717281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-06-10 10:54:35.717458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-06-10 10:54:35.717465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-06-10 10:54:35.717806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-06-10 10:54:35.717813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 
00:29:11.708 [2024-06-10 10:54:35.718175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-06-10 10:54:35.718182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-06-10 10:54:35.718542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-06-10 10:54:35.718549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-06-10 10:54:35.718882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-06-10 10:54:35.718889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-06-10 10:54:35.719150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-06-10 10:54:35.719156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-06-10 10:54:35.719416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-06-10 10:54:35.719423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-06-10 10:54:35.719778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-06-10 10:54:35.719784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-06-10 10:54:35.720038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-06-10 10:54:35.720045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-06-10 10:54:35.720365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-06-10 10:54:35.720372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-06-10 10:54:35.720712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-06-10 10:54:35.720718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-06-10 10:54:35.721076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-06-10 10:54:35.721091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 
00:29:11.709 [2024-06-10 10:54:35.721445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-06-10 10:54:35.721452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-06-10 10:54:35.721785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-06-10 10:54:35.721791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-06-10 10:54:35.722029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-06-10 10:54:35.722035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-06-10 10:54:35.722385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-06-10 10:54:35.722392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-06-10 10:54:35.722765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-06-10 10:54:35.722772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-06-10 10:54:35.723125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-06-10 10:54:35.723132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-06-10 10:54:35.723502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-06-10 10:54:35.723509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-06-10 10:54:35.723845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-06-10 10:54:35.723851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-06-10 10:54:35.724211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-06-10 10:54:35.724227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-06-10 10:54:35.724582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-06-10 10:54:35.724589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 
00:29:11.709 [2024-06-10 10:54:35.724966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-06-10 10:54:35.724972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-06-10 10:54:35.725315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-06-10 10:54:35.725322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-06-10 10:54:35.725703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-06-10 10:54:35.725713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-06-10 10:54:35.726045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-06-10 10:54:35.726052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-06-10 10:54:35.726414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-06-10 10:54:35.726421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-06-10 10:54:35.726787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-06-10 10:54:35.726794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-06-10 10:54:35.727115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-06-10 10:54:35.727129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-06-10 10:54:35.727510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-06-10 10:54:35.727517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-06-10 10:54:35.727851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-06-10 10:54:35.727857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-06-10 10:54:35.728213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-06-10 10:54:35.728220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 
00:29:11.709 [2024-06-10 10:54:35.728550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-06-10 10:54:35.728557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-06-10 10:54:35.728891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-06-10 10:54:35.728898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-06-10 10:54:35.729304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-06-10 10:54:35.729311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-06-10 10:54:35.729488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-06-10 10:54:35.729495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 00:29:11.710 [2024-06-10 10:54:35.729864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.710 [2024-06-10 10:54:35.729870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 00:29:11.710 [2024-06-10 10:54:35.730211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.710 [2024-06-10 10:54:35.730217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 00:29:11.710 [2024-06-10 10:54:35.730561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.710 [2024-06-10 10:54:35.730568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 00:29:11.710 [2024-06-10 10:54:35.730941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.710 [2024-06-10 10:54:35.730948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 00:29:11.710 [2024-06-10 10:54:35.731284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.710 [2024-06-10 10:54:35.731292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 00:29:11.710 [2024-06-10 10:54:35.731342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.710 [2024-06-10 10:54:35.731349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 
00:29:11.710 [2024-06-10 10:54:35.731681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.710 [2024-06-10 10:54:35.731688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 00:29:11.710 [2024-06-10 10:54:35.732022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.710 [2024-06-10 10:54:35.732028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 00:29:11.710 [2024-06-10 10:54:35.732380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.710 [2024-06-10 10:54:35.732387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 00:29:11.710 [2024-06-10 10:54:35.732732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.710 [2024-06-10 10:54:35.732739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 00:29:11.710 [2024-06-10 10:54:35.733007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.710 [2024-06-10 10:54:35.733014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 00:29:11.710 [2024-06-10 10:54:35.733349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.710 [2024-06-10 10:54:35.733355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 00:29:11.710 [2024-06-10 10:54:35.733727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.710 [2024-06-10 10:54:35.733733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 00:29:11.710 [2024-06-10 10:54:35.733986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.710 [2024-06-10 10:54:35.733993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 00:29:11.710 [2024-06-10 10:54:35.734349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.710 [2024-06-10 10:54:35.734356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 00:29:11.710 [2024-06-10 10:54:35.734700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.710 [2024-06-10 10:54:35.734707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 
00:29:11.710 [2024-06-10 10:54:35.735049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.710 [2024-06-10 10:54:35.735056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 00:29:11.710 [2024-06-10 10:54:35.735293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.710 [2024-06-10 10:54:35.735300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 00:29:11.710 [2024-06-10 10:54:35.735637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.710 [2024-06-10 10:54:35.735644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 00:29:11.710 [2024-06-10 10:54:35.735995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.710 [2024-06-10 10:54:35.736002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 00:29:11.710 [2024-06-10 10:54:35.736356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.710 [2024-06-10 10:54:35.736362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 00:29:11.710 [2024-06-10 10:54:35.736697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.710 [2024-06-10 10:54:35.736703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 00:29:11.710 [2024-06-10 10:54:35.737057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.710 [2024-06-10 10:54:35.737063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 00:29:11.710 [2024-06-10 10:54:35.737411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.710 [2024-06-10 10:54:35.737418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 00:29:11.710 [2024-06-10 10:54:35.737778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.710 [2024-06-10 10:54:35.737785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 00:29:11.710 [2024-06-10 10:54:35.738118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.710 [2024-06-10 10:54:35.738125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 
00:29:11.710 [2024-06-10 10:54:35.738466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.710 [2024-06-10 10:54:35.738473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 00:29:11.711 [2024-06-10 10:54:35.738881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.711 [2024-06-10 10:54:35.738887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.711 qpair failed and we were unable to recover it. 00:29:11.711 [2024-06-10 10:54:35.739222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.711 [2024-06-10 10:54:35.739228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.711 qpair failed and we were unable to recover it. 00:29:11.711 [2024-06-10 10:54:35.739579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.711 [2024-06-10 10:54:35.739586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.711 qpair failed and we were unable to recover it. 00:29:11.711 [2024-06-10 10:54:35.739942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.711 [2024-06-10 10:54:35.739948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.711 qpair failed and we were unable to recover it. 00:29:11.711 [2024-06-10 10:54:35.740201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.711 [2024-06-10 10:54:35.740207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.711 qpair failed and we were unable to recover it. 00:29:11.711 [2024-06-10 10:54:35.740404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.711 [2024-06-10 10:54:35.740411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.711 qpair failed and we were unable to recover it. 00:29:11.711 [2024-06-10 10:54:35.740740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.711 [2024-06-10 10:54:35.740746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.711 qpair failed and we were unable to recover it. 00:29:11.711 [2024-06-10 10:54:35.741084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.711 [2024-06-10 10:54:35.741091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.711 qpair failed and we were unable to recover it. 00:29:11.711 [2024-06-10 10:54:35.741454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.711 [2024-06-10 10:54:35.741461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.711 qpair failed and we were unable to recover it. 
00:29:11.711 [2024-06-10 10:54:35.741853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.711 [2024-06-10 10:54:35.741860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.711 qpair failed and we were unable to recover it. 00:29:11.711 [2024-06-10 10:54:35.742030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.711 [2024-06-10 10:54:35.742036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.711 qpair failed and we were unable to recover it. 00:29:11.711 [2024-06-10 10:54:35.742402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.711 [2024-06-10 10:54:35.742408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.711 qpair failed and we were unable to recover it. 00:29:11.711 [2024-06-10 10:54:35.742761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.711 [2024-06-10 10:54:35.742767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.711 qpair failed and we were unable to recover it. 00:29:11.711 [2024-06-10 10:54:35.743126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.711 [2024-06-10 10:54:35.743132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.711 qpair failed and we were unable to recover it. 00:29:11.711 [2024-06-10 10:54:35.743508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.711 [2024-06-10 10:54:35.743516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.711 qpair failed and we were unable to recover it. 00:29:11.711 [2024-06-10 10:54:35.743727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.711 [2024-06-10 10:54:35.743734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.711 qpair failed and we were unable to recover it. 00:29:11.711 [2024-06-10 10:54:35.744119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.711 [2024-06-10 10:54:35.744125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.711 qpair failed and we were unable to recover it. 00:29:11.711 [2024-06-10 10:54:35.744471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.711 [2024-06-10 10:54:35.744477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.711 qpair failed and we were unable to recover it. 00:29:11.711 [2024-06-10 10:54:35.744733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.711 [2024-06-10 10:54:35.744739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.711 qpair failed and we were unable to recover it. 
00:29:11.711 [2024-06-10 10:54:35.745115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.711 [2024-06-10 10:54:35.745122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.711 qpair failed and we were unable to recover it. 00:29:11.711 [2024-06-10 10:54:35.745503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.711 [2024-06-10 10:54:35.745510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.711 qpair failed and we were unable to recover it. 00:29:11.711 [2024-06-10 10:54:35.745862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.711 [2024-06-10 10:54:35.745870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.711 qpair failed and we were unable to recover it. 00:29:11.711 [2024-06-10 10:54:35.746230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.711 [2024-06-10 10:54:35.746238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.711 qpair failed and we were unable to recover it. 00:29:11.711 [2024-06-10 10:54:35.746598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.711 [2024-06-10 10:54:35.746605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.711 qpair failed and we were unable to recover it. 00:29:11.711 [2024-06-10 10:54:35.746945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.711 [2024-06-10 10:54:35.746952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.711 qpair failed and we were unable to recover it. 00:29:11.711 [2024-06-10 10:54:35.747318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.711 [2024-06-10 10:54:35.747325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.711 qpair failed and we were unable to recover it. 00:29:11.711 [2024-06-10 10:54:35.747690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.711 [2024-06-10 10:54:35.747697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.711 qpair failed and we were unable to recover it. 00:29:11.711 [2024-06-10 10:54:35.747965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.711 [2024-06-10 10:54:35.747971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.711 qpair failed and we were unable to recover it. 00:29:11.711 [2024-06-10 10:54:35.748311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.711 [2024-06-10 10:54:35.748323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.711 qpair failed and we were unable to recover it. 
00:29:11.711 [2024-06-10 10:54:35.748676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.711 [2024-06-10 10:54:35.748682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.711 qpair failed and we were unable to recover it. 00:29:11.711 [2024-06-10 10:54:35.749023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.711 [2024-06-10 10:54:35.749029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.711 qpair failed and we were unable to recover it. 00:29:11.711 [2024-06-10 10:54:35.749395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.711 [2024-06-10 10:54:35.749402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.711 qpair failed and we were unable to recover it. 00:29:11.711 [2024-06-10 10:54:35.749761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.711 [2024-06-10 10:54:35.749767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.711 qpair failed and we were unable to recover it. 00:29:11.711 [2024-06-10 10:54:35.750156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.711 [2024-06-10 10:54:35.750163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.711 qpair failed and we were unable to recover it. 00:29:11.711 [2024-06-10 10:54:35.750409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.711 [2024-06-10 10:54:35.750416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.711 qpair failed and we were unable to recover it. 00:29:11.711 [2024-06-10 10:54:35.750778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.711 [2024-06-10 10:54:35.750784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.711 qpair failed and we were unable to recover it. 00:29:11.711 [2024-06-10 10:54:35.751132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.712 [2024-06-10 10:54:35.751140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.712 qpair failed and we were unable to recover it. 00:29:11.712 [2024-06-10 10:54:35.751518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.712 [2024-06-10 10:54:35.751525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.712 qpair failed and we were unable to recover it. 00:29:11.712 [2024-06-10 10:54:35.751865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.712 [2024-06-10 10:54:35.751871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.712 qpair failed and we were unable to recover it. 
00:29:11.712 [2024-06-10 10:54:35.752229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.712 [2024-06-10 10:54:35.752237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.712 qpair failed and we were unable to recover it. 00:29:11.712 [2024-06-10 10:54:35.752508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.712 [2024-06-10 10:54:35.752515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.712 qpair failed and we were unable to recover it. 00:29:11.712 [2024-06-10 10:54:35.752770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.712 [2024-06-10 10:54:35.752776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.712 qpair failed and we were unable to recover it. 00:29:11.712 [2024-06-10 10:54:35.753042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.712 [2024-06-10 10:54:35.753049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.712 qpair failed and we were unable to recover it. 00:29:11.712 [2024-06-10 10:54:35.753397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.712 [2024-06-10 10:54:35.753404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.712 qpair failed and we were unable to recover it. 00:29:11.712 [2024-06-10 10:54:35.753733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.712 [2024-06-10 10:54:35.753739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.712 qpair failed and we were unable to recover it. 00:29:11.712 [2024-06-10 10:54:35.754102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.712 [2024-06-10 10:54:35.754109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.712 qpair failed and we were unable to recover it. 00:29:11.712 [2024-06-10 10:54:35.754451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.712 [2024-06-10 10:54:35.754458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.712 qpair failed and we were unable to recover it. 00:29:11.712 [2024-06-10 10:54:35.754690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.712 [2024-06-10 10:54:35.754698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.712 qpair failed and we were unable to recover it. 00:29:11.712 [2024-06-10 10:54:35.755127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.712 [2024-06-10 10:54:35.755134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.712 qpair failed and we were unable to recover it. 
00:29:11.712 [2024-06-10 10:54:35.755510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.712 [2024-06-10 10:54:35.755518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.712 qpair failed and we were unable to recover it. 00:29:11.712 [2024-06-10 10:54:35.755896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.712 [2024-06-10 10:54:35.755903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.712 qpair failed and we were unable to recover it. 00:29:11.712 [2024-06-10 10:54:35.756261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.712 [2024-06-10 10:54:35.756270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.712 qpair failed and we were unable to recover it. 00:29:11.712 [2024-06-10 10:54:35.756587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.712 [2024-06-10 10:54:35.756594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.712 qpair failed and we were unable to recover it. 00:29:11.712 [2024-06-10 10:54:35.756927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.712 [2024-06-10 10:54:35.756933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.712 qpair failed and we were unable to recover it. 00:29:11.712 [2024-06-10 10:54:35.757323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.712 [2024-06-10 10:54:35.757331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.712 qpair failed and we were unable to recover it. 00:29:11.712 [2024-06-10 10:54:35.757684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.712 [2024-06-10 10:54:35.757691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.712 qpair failed and we were unable to recover it. 00:29:11.712 [2024-06-10 10:54:35.758029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.712 [2024-06-10 10:54:35.758036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.712 qpair failed and we were unable to recover it. 00:29:11.712 [2024-06-10 10:54:35.758234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.712 [2024-06-10 10:54:35.758241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.712 qpair failed and we were unable to recover it. 00:29:11.712 [2024-06-10 10:54:35.758326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.712 [2024-06-10 10:54:35.758333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.712 qpair failed and we were unable to recover it. 
00:29:11.712 [2024-06-10 10:54:35.758649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.712 [2024-06-10 10:54:35.758657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.712 qpair failed and we were unable to recover it. 00:29:11.712 [2024-06-10 10:54:35.759013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.712 [2024-06-10 10:54:35.759020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.712 qpair failed and we were unable to recover it. 00:29:11.712 [2024-06-10 10:54:35.759358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.712 [2024-06-10 10:54:35.759364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.712 qpair failed and we were unable to recover it. 00:29:11.712 [2024-06-10 10:54:35.759707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.712 [2024-06-10 10:54:35.759714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.712 qpair failed and we were unable to recover it. 00:29:11.712 [2024-06-10 10:54:35.760104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.712 [2024-06-10 10:54:35.760111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.712 qpair failed and we were unable to recover it. 00:29:11.712 [2024-06-10 10:54:35.760464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.712 [2024-06-10 10:54:35.760471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.712 qpair failed and we were unable to recover it. 00:29:11.712 [2024-06-10 10:54:35.760806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.712 [2024-06-10 10:54:35.760813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.712 qpair failed and we were unable to recover it. 00:29:11.712 [2024-06-10 10:54:35.761168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.712 [2024-06-10 10:54:35.761175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.712 qpair failed and we were unable to recover it. 00:29:11.712 [2024-06-10 10:54:35.761532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.712 [2024-06-10 10:54:35.761539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.712 qpair failed and we were unable to recover it. 00:29:11.712 [2024-06-10 10:54:35.761874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.712 [2024-06-10 10:54:35.761882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.712 qpair failed and we were unable to recover it. 
00:29:11.712 [2024-06-10 10:54:35.762222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.713 [2024-06-10 10:54:35.762229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.713 qpair failed and we were unable to recover it. 00:29:11.713 [2024-06-10 10:54:35.762408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.713 [2024-06-10 10:54:35.762415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.713 qpair failed and we were unable to recover it. 00:29:11.713 [2024-06-10 10:54:35.762781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.713 [2024-06-10 10:54:35.762787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.713 qpair failed and we were unable to recover it. 00:29:11.713 [2024-06-10 10:54:35.763126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.713 [2024-06-10 10:54:35.763133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.713 qpair failed and we were unable to recover it. 00:29:11.713 [2024-06-10 10:54:35.763503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.713 [2024-06-10 10:54:35.763509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.713 qpair failed and we were unable to recover it. 00:29:11.713 [2024-06-10 10:54:35.763884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.713 [2024-06-10 10:54:35.763891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.713 qpair failed and we were unable to recover it. 00:29:11.713 [2024-06-10 10:54:35.764247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.713 [2024-06-10 10:54:35.764254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.713 qpair failed and we were unable to recover it. 00:29:11.713 [2024-06-10 10:54:35.764612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.713 [2024-06-10 10:54:35.764618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.713 qpair failed and we were unable to recover it. 00:29:11.713 [2024-06-10 10:54:35.764927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.713 [2024-06-10 10:54:35.764933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.713 qpair failed and we were unable to recover it. 00:29:11.713 [2024-06-10 10:54:35.765293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.713 [2024-06-10 10:54:35.765299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.713 qpair failed and we were unable to recover it. 
00:29:11.713 [2024-06-10 10:54:35.765639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.713 [2024-06-10 10:54:35.765645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420
00:29:11.713 qpair failed and we were unable to recover it.
[The same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats without interruption from 10:54:35.765 through the final occurrence shown below.]
00:29:11.719 [2024-06-10 10:54:35.836943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.719 [2024-06-10 10:54:35.836950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420
00:29:11.719 qpair failed and we were unable to recover it.
00:29:11.719 [2024-06-10 10:54:35.837302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-06-10 10:54:35.837308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-06-10 10:54:35.837701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-06-10 10:54:35.837707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-06-10 10:54:35.837955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-06-10 10:54:35.837961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-06-10 10:54:35.838315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-06-10 10:54:35.838322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-06-10 10:54:35.838559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-06-10 10:54:35.838565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-06-10 10:54:35.838956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-06-10 10:54:35.838962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-06-10 10:54:35.839346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-06-10 10:54:35.839353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-06-10 10:54:35.839567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-06-10 10:54:35.839573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-06-10 10:54:35.839968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-06-10 10:54:35.839974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-06-10 10:54:35.840402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-06-10 10:54:35.840408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 
00:29:11.719 [2024-06-10 10:54:35.840780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-06-10 10:54:35.840787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-06-10 10:54:35.841141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-06-10 10:54:35.841148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-06-10 10:54:35.841504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-06-10 10:54:35.841510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-06-10 10:54:35.841857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-06-10 10:54:35.841864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-06-10 10:54:35.842222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-06-10 10:54:35.842229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-06-10 10:54:35.842634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-06-10 10:54:35.842641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-06-10 10:54:35.842908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-06-10 10:54:35.842915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-06-10 10:54:35.843294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-06-10 10:54:35.843300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-06-10 10:54:35.843636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-06-10 10:54:35.843643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-06-10 10:54:35.843862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-06-10 10:54:35.843870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 
00:29:11.719 [2024-06-10 10:54:35.844127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-06-10 10:54:35.844133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-06-10 10:54:35.844507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-06-10 10:54:35.844514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-06-10 10:54:35.844851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-06-10 10:54:35.844857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-06-10 10:54:35.845213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-06-10 10:54:35.845219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-06-10 10:54:35.845576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-06-10 10:54:35.845583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-06-10 10:54:35.845924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-06-10 10:54:35.845932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-06-10 10:54:35.846297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-06-10 10:54:35.846303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-06-10 10:54:35.846665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-06-10 10:54:35.846672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-06-10 10:54:35.846987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-06-10 10:54:35.846993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-06-10 10:54:35.847362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-06-10 10:54:35.847369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 
00:29:11.720 [2024-06-10 10:54:35.847735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-06-10 10:54:35.847741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-06-10 10:54:35.848013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-06-10 10:54:35.848020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-06-10 10:54:35.848379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-06-10 10:54:35.848385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-06-10 10:54:35.848762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-06-10 10:54:35.848768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-06-10 10:54:35.849131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-06-10 10:54:35.849137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-06-10 10:54:35.849511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-06-10 10:54:35.849518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-06-10 10:54:35.849857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-06-10 10:54:35.849863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-06-10 10:54:35.850248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-06-10 10:54:35.850255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-06-10 10:54:35.850588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-06-10 10:54:35.850594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-06-10 10:54:35.850809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-06-10 10:54:35.850815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 
00:29:11.720 [2024-06-10 10:54:35.851036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-06-10 10:54:35.851042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-06-10 10:54:35.851402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-06-10 10:54:35.851409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-06-10 10:54:35.851782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-06-10 10:54:35.851788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-06-10 10:54:35.852122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-06-10 10:54:35.852129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-06-10 10:54:35.852484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-06-10 10:54:35.852491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-06-10 10:54:35.852835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-06-10 10:54:35.852842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-06-10 10:54:35.853205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-06-10 10:54:35.853211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-06-10 10:54:35.853473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-06-10 10:54:35.853480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-06-10 10:54:35.853750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-06-10 10:54:35.853756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-06-10 10:54:35.854016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-06-10 10:54:35.854022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 
00:29:11.720 [2024-06-10 10:54:35.854387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-06-10 10:54:35.854393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-06-10 10:54:35.854675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-06-10 10:54:35.854681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-06-10 10:54:35.855036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-06-10 10:54:35.855042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-06-10 10:54:35.855462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-06-10 10:54:35.855468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-06-10 10:54:35.855825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-06-10 10:54:35.855832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-06-10 10:54:35.856093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-06-10 10:54:35.856099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-06-10 10:54:35.856303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-06-10 10:54:35.856309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-06-10 10:54:35.856677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-06-10 10:54:35.856683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-06-10 10:54:35.857025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-06-10 10:54:35.857032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-06-10 10:54:35.857389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-06-10 10:54:35.857396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 
00:29:11.721 [2024-06-10 10:54:35.857607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-06-10 10:54:35.857614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-06-10 10:54:35.857893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-06-10 10:54:35.857900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-06-10 10:54:35.858257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-06-10 10:54:35.858265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-06-10 10:54:35.858612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-06-10 10:54:35.858618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-06-10 10:54:35.858873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-06-10 10:54:35.858880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-06-10 10:54:35.859117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-06-10 10:54:35.859123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-06-10 10:54:35.859499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-06-10 10:54:35.859506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-06-10 10:54:35.859700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-06-10 10:54:35.859707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-06-10 10:54:35.860086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-06-10 10:54:35.860094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-06-10 10:54:35.860284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-06-10 10:54:35.860291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 
00:29:11.721 [2024-06-10 10:54:35.860635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-06-10 10:54:35.860641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-06-10 10:54:35.860977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-06-10 10:54:35.860984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-06-10 10:54:35.861205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-06-10 10:54:35.861212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-06-10 10:54:35.861402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-06-10 10:54:35.861410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-06-10 10:54:35.861762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-06-10 10:54:35.861770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-06-10 10:54:35.861923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-06-10 10:54:35.861931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-06-10 10:54:35.862301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-06-10 10:54:35.862309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-06-10 10:54:35.862541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-06-10 10:54:35.862548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-06-10 10:54:35.862807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-06-10 10:54:35.862814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-06-10 10:54:35.863089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-06-10 10:54:35.863096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 
00:29:11.721 [2024-06-10 10:54:35.863429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-06-10 10:54:35.863435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-06-10 10:54:35.863680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-06-10 10:54:35.863687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-06-10 10:54:35.863917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-06-10 10:54:35.863923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-06-10 10:54:35.864287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-06-10 10:54:35.864294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-06-10 10:54:35.864601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-06-10 10:54:35.864607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-06-10 10:54:35.864823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-06-10 10:54:35.864830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-06-10 10:54:35.865189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-06-10 10:54:35.865195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-06-10 10:54:35.865534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-06-10 10:54:35.865541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-06-10 10:54:35.865897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-06-10 10:54:35.865903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.722 [2024-06-10 10:54:35.866061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-06-10 10:54:35.866068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 
00:29:11.722 [2024-06-10 10:54:35.866382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-06-10 10:54:35.866389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 00:29:11.722 [2024-06-10 10:54:35.866759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-06-10 10:54:35.866765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 00:29:11.722 [2024-06-10 10:54:35.867130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-06-10 10:54:35.867137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 00:29:11.722 [2024-06-10 10:54:35.867523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-06-10 10:54:35.867529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 00:29:11.722 [2024-06-10 10:54:35.867866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-06-10 10:54:35.867872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 00:29:11.722 [2024-06-10 10:54:35.868237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-06-10 10:54:35.868252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 00:29:11.722 [2024-06-10 10:54:35.868615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-06-10 10:54:35.868621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 00:29:11.722 [2024-06-10 10:54:35.868954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-06-10 10:54:35.868961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 00:29:11.722 [2024-06-10 10:54:35.869221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-06-10 10:54:35.869228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 00:29:11.722 [2024-06-10 10:54:35.869587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-06-10 10:54:35.869595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 
00:29:11.722 [2024-06-10 10:54:35.869929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-06-10 10:54:35.869936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 00:29:11.722 [2024-06-10 10:54:35.870193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-06-10 10:54:35.870200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 00:29:11.722 [2024-06-10 10:54:35.870531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-06-10 10:54:35.870538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 00:29:11.722 [2024-06-10 10:54:35.870899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-06-10 10:54:35.870906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 00:29:11.722 [2024-06-10 10:54:35.871037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-06-10 10:54:35.871044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 00:29:11.722 [2024-06-10 10:54:35.871414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-06-10 10:54:35.871421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 00:29:11.722 [2024-06-10 10:54:35.871754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-06-10 10:54:35.871760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 00:29:11.722 [2024-06-10 10:54:35.872151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-06-10 10:54:35.872158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 00:29:11.722 [2024-06-10 10:54:35.872529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-06-10 10:54:35.872536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 00:29:11.722 [2024-06-10 10:54:35.872871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-06-10 10:54:35.872878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 
00:29:11.722 [2024-06-10 10:54:35.873235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-06-10 10:54:35.873244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 00:29:11.722 [2024-06-10 10:54:35.873612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-06-10 10:54:35.873619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 00:29:11.722 [2024-06-10 10:54:35.873942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-06-10 10:54:35.873949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 00:29:11.722 [2024-06-10 10:54:35.874327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-06-10 10:54:35.874335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 00:29:11.722 [2024-06-10 10:54:35.874696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-06-10 10:54:35.874703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 00:29:11.722 [2024-06-10 10:54:35.875059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-06-10 10:54:35.875066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 00:29:11.722 [2024-06-10 10:54:35.875322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-06-10 10:54:35.875329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 00:29:11.722 [2024-06-10 10:54:35.875704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-06-10 10:54:35.875711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 00:29:11.722 [2024-06-10 10:54:35.875969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-06-10 10:54:35.875977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 00:29:11.722 [2024-06-10 10:54:35.876332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-06-10 10:54:35.876339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 
00:29:11.723 [2024-06-10 10:54:35.876651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.723 [2024-06-10 10:54:35.876658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.723 qpair failed and we were unable to recover it. 00:29:11.723 [2024-06-10 10:54:35.876909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.723 [2024-06-10 10:54:35.876916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.723 qpair failed and we were unable to recover it. 00:29:11.723 [2024-06-10 10:54:35.877271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.723 [2024-06-10 10:54:35.877278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.723 qpair failed and we were unable to recover it. 00:29:11.723 [2024-06-10 10:54:35.877647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.723 [2024-06-10 10:54:35.877654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.723 qpair failed and we were unable to recover it. 00:29:11.723 [2024-06-10 10:54:35.878013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.723 [2024-06-10 10:54:35.878020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.723 qpair failed and we were unable to recover it. 00:29:11.723 [2024-06-10 10:54:35.878396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.723 [2024-06-10 10:54:35.878403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.723 qpair failed and we were unable to recover it. 00:29:11.723 [2024-06-10 10:54:35.878666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.723 [2024-06-10 10:54:35.878673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.723 qpair failed and we were unable to recover it. 00:29:11.723 [2024-06-10 10:54:35.879033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.723 [2024-06-10 10:54:35.879040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.723 qpair failed and we were unable to recover it. 00:29:11.723 [2024-06-10 10:54:35.879392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.723 [2024-06-10 10:54:35.879400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.723 qpair failed and we were unable to recover it. 00:29:11.723 [2024-06-10 10:54:35.879657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.723 [2024-06-10 10:54:35.879664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.723 qpair failed and we were unable to recover it. 
00:29:11.723 [2024-06-10 10:54:35.879996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.723 [2024-06-10 10:54:35.880004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.723 qpair failed and we were unable to recover it. 00:29:11.723 [2024-06-10 10:54:35.880358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.723 [2024-06-10 10:54:35.880365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.723 qpair failed and we were unable to recover it. 00:29:11.723 [2024-06-10 10:54:35.880728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.723 [2024-06-10 10:54:35.880735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.723 qpair failed and we were unable to recover it. 00:29:11.723 [2024-06-10 10:54:35.881144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.723 [2024-06-10 10:54:35.881151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.723 qpair failed and we were unable to recover it. 00:29:11.723 [2024-06-10 10:54:35.881497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.723 [2024-06-10 10:54:35.881504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.723 qpair failed and we were unable to recover it. 00:29:11.723 [2024-06-10 10:54:35.881861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.723 [2024-06-10 10:54:35.881868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.723 qpair failed and we were unable to recover it. 00:29:11.723 [2024-06-10 10:54:35.882225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.723 [2024-06-10 10:54:35.882232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.723 qpair failed and we were unable to recover it. 00:29:11.723 [2024-06-10 10:54:35.882569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.723 [2024-06-10 10:54:35.882577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.723 qpair failed and we were unable to recover it. 00:29:11.723 [2024-06-10 10:54:35.882839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.723 [2024-06-10 10:54:35.882847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.723 qpair failed and we were unable to recover it. 00:29:11.723 [2024-06-10 10:54:35.883223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.723 [2024-06-10 10:54:35.883232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.723 qpair failed and we were unable to recover it. 
[The same pair of errors — posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 — repeats for every reconnect attempt between 2024-06-10 10:54:35.883284 and 10:54:35.947769, each attempt ending with "qpair failed and we were unable to recover it."]
00:29:11.729 [2024-06-10 10:54:35.948148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.729 [2024-06-10 10:54:35.948155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.729 qpair failed and we were unable to recover it. 00:29:11.729 [2024-06-10 10:54:35.948506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.729 [2024-06-10 10:54:35.948513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.729 qpair failed and we were unable to recover it. 00:29:11.729 [2024-06-10 10:54:35.948758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.729 [2024-06-10 10:54:35.948765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.729 qpair failed and we were unable to recover it. 00:29:11.729 [2024-06-10 10:54:35.949113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.729 [2024-06-10 10:54:35.949120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.729 qpair failed and we were unable to recover it. 00:29:11.729 [2024-06-10 10:54:35.949496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.729 [2024-06-10 10:54:35.949503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.729 qpair failed and we were unable to recover it. 00:29:11.729 [2024-06-10 10:54:35.949838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.729 [2024-06-10 10:54:35.949845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.729 qpair failed and we were unable to recover it. 00:29:11.729 [2024-06-10 10:54:35.950205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.729 [2024-06-10 10:54:35.950211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.729 qpair failed and we were unable to recover it. 00:29:11.729 [2024-06-10 10:54:35.950569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.729 [2024-06-10 10:54:35.950575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.729 qpair failed and we were unable to recover it. 00:29:11.729 [2024-06-10 10:54:35.950915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.729 [2024-06-10 10:54:35.950921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.729 qpair failed and we were unable to recover it. 00:29:11.729 [2024-06-10 10:54:35.951290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.729 [2024-06-10 10:54:35.951297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.729 qpair failed and we were unable to recover it. 
00:29:11.729 [2024-06-10 10:54:35.951661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.729 [2024-06-10 10:54:35.951668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.729 qpair failed and we were unable to recover it. 00:29:11.729 [2024-06-10 10:54:35.951847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.729 [2024-06-10 10:54:35.951855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.729 qpair failed and we were unable to recover it. 00:29:11.729 [2024-06-10 10:54:35.952213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.729 [2024-06-10 10:54:35.952220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.729 qpair failed and we were unable to recover it. 00:29:11.729 [2024-06-10 10:54:35.952554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.729 [2024-06-10 10:54:35.952560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.729 qpair failed and we were unable to recover it. 00:29:11.729 [2024-06-10 10:54:35.952907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.729 [2024-06-10 10:54:35.952914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.729 qpair failed and we were unable to recover it. 00:29:11.729 [2024-06-10 10:54:35.953266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.729 [2024-06-10 10:54:35.953272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.729 qpair failed and we were unable to recover it. 00:29:11.729 [2024-06-10 10:54:35.953608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.729 [2024-06-10 10:54:35.953614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.729 qpair failed and we were unable to recover it. 00:29:11.729 [2024-06-10 10:54:35.953976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.729 [2024-06-10 10:54:35.953989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.729 qpair failed and we were unable to recover it. 00:29:11.729 [2024-06-10 10:54:35.954325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.729 [2024-06-10 10:54:35.954331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.729 qpair failed and we were unable to recover it. 00:29:11.729 [2024-06-10 10:54:35.954670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.729 [2024-06-10 10:54:35.954676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.729 qpair failed and we were unable to recover it. 
00:29:11.729 [2024-06-10 10:54:35.955034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.729 [2024-06-10 10:54:35.955040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.729 qpair failed and we were unable to recover it. 00:29:11.729 [2024-06-10 10:54:35.955394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.729 [2024-06-10 10:54:35.955400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.729 qpair failed and we were unable to recover it. 00:29:11.729 [2024-06-10 10:54:35.955736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.729 [2024-06-10 10:54:35.955743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.729 qpair failed and we were unable to recover it. 00:29:11.729 [2024-06-10 10:54:35.956095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.729 [2024-06-10 10:54:35.956101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.729 qpair failed and we were unable to recover it. 00:29:11.729 [2024-06-10 10:54:35.956441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.729 [2024-06-10 10:54:35.956448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.729 qpair failed and we were unable to recover it. 00:29:11.729 [2024-06-10 10:54:35.956825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.729 [2024-06-10 10:54:35.956832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.729 qpair failed and we were unable to recover it. 00:29:11.729 [2024-06-10 10:54:35.957097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.729 [2024-06-10 10:54:35.957103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.729 qpair failed and we were unable to recover it. 00:29:11.729 [2024-06-10 10:54:35.957292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.729 [2024-06-10 10:54:35.957299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.729 qpair failed and we were unable to recover it. 00:29:11.729 [2024-06-10 10:54:35.957605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.729 [2024-06-10 10:54:35.957612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.729 qpair failed and we were unable to recover it. 00:29:11.729 [2024-06-10 10:54:35.957950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.729 [2024-06-10 10:54:35.957957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.729 qpair failed and we were unable to recover it. 
00:29:11.729 [2024-06-10 10:54:35.958277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.729 [2024-06-10 10:54:35.958284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.729 qpair failed and we were unable to recover it. 00:29:11.729 [2024-06-10 10:54:35.958517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.730 [2024-06-10 10:54:35.958524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.730 qpair failed and we were unable to recover it. 00:29:11.730 [2024-06-10 10:54:35.958714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.730 [2024-06-10 10:54:35.958721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.730 qpair failed and we were unable to recover it. 00:29:11.730 [2024-06-10 10:54:35.959092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.730 [2024-06-10 10:54:35.959099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.730 qpair failed and we were unable to recover it. 00:29:11.730 [2024-06-10 10:54:35.959456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.730 [2024-06-10 10:54:35.959462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.730 qpair failed and we were unable to recover it. 00:29:11.730 [2024-06-10 10:54:35.959795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.730 [2024-06-10 10:54:35.959802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.730 qpair failed and we were unable to recover it. 00:29:11.730 [2024-06-10 10:54:35.960177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.730 [2024-06-10 10:54:35.960185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.730 qpair failed and we were unable to recover it. 00:29:11.730 [2024-06-10 10:54:35.960449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.730 [2024-06-10 10:54:35.960456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.730 qpair failed and we were unable to recover it. 00:29:11.730 [2024-06-10 10:54:35.960813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.730 [2024-06-10 10:54:35.960819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.730 qpair failed and we were unable to recover it. 00:29:11.730 [2024-06-10 10:54:35.961152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.730 [2024-06-10 10:54:35.961158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.730 qpair failed and we were unable to recover it. 
00:29:11.730 [2024-06-10 10:54:35.961516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.730 [2024-06-10 10:54:35.961531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.730 qpair failed and we were unable to recover it. 00:29:11.730 [2024-06-10 10:54:35.961961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.730 [2024-06-10 10:54:35.961968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.730 qpair failed and we were unable to recover it. 00:29:11.730 [2024-06-10 10:54:35.962111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.730 [2024-06-10 10:54:35.962119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.730 qpair failed and we were unable to recover it. 00:29:11.730 [2024-06-10 10:54:35.962499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.730 [2024-06-10 10:54:35.962506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.730 qpair failed and we were unable to recover it. 00:29:11.730 [2024-06-10 10:54:35.962840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.730 [2024-06-10 10:54:35.962846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.730 qpair failed and we were unable to recover it. 00:29:11.730 [2024-06-10 10:54:35.963081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.730 [2024-06-10 10:54:35.963088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.730 qpair failed and we were unable to recover it. 00:29:11.730 [2024-06-10 10:54:35.963441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.730 [2024-06-10 10:54:35.963448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.730 qpair failed and we were unable to recover it. 00:29:11.730 [2024-06-10 10:54:35.963864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.730 [2024-06-10 10:54:35.963870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.730 qpair failed and we were unable to recover it. 00:29:11.730 [2024-06-10 10:54:35.964207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.730 [2024-06-10 10:54:35.964214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.730 qpair failed and we were unable to recover it. 00:29:11.730 [2024-06-10 10:54:35.964560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.730 [2024-06-10 10:54:35.964566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.730 qpair failed and we were unable to recover it. 
00:29:11.730 [2024-06-10 10:54:35.964905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.730 [2024-06-10 10:54:35.964911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.730 qpair failed and we were unable to recover it. 00:29:11.730 [2024-06-10 10:54:35.965269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.730 [2024-06-10 10:54:35.965276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.730 qpair failed and we were unable to recover it. 00:29:11.730 [2024-06-10 10:54:35.965655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.730 [2024-06-10 10:54:35.965661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.730 qpair failed and we were unable to recover it. 00:29:11.730 [2024-06-10 10:54:35.965993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.730 [2024-06-10 10:54:35.966000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.730 qpair failed and we were unable to recover it. 00:29:11.730 [2024-06-10 10:54:35.966336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.730 [2024-06-10 10:54:35.966343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.730 qpair failed and we were unable to recover it. 00:29:11.730 [2024-06-10 10:54:35.966678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.730 [2024-06-10 10:54:35.966685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.730 qpair failed and we were unable to recover it. 00:29:11.730 [2024-06-10 10:54:35.966867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.730 [2024-06-10 10:54:35.966874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.730 qpair failed and we were unable to recover it. 00:29:11.730 [2024-06-10 10:54:35.967235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.730 [2024-06-10 10:54:35.967241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.730 qpair failed and we were unable to recover it. 00:29:11.730 [2024-06-10 10:54:35.967577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.730 [2024-06-10 10:54:35.967583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.730 qpair failed and we were unable to recover it. 00:29:11.730 [2024-06-10 10:54:35.967827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.730 [2024-06-10 10:54:35.967833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.730 qpair failed and we were unable to recover it. 
00:29:11.730 [2024-06-10 10:54:35.968175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.730 [2024-06-10 10:54:35.968181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.730 qpair failed and we were unable to recover it. 00:29:11.730 [2024-06-10 10:54:35.968553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.730 [2024-06-10 10:54:35.968560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.730 qpair failed and we were unable to recover it. 00:29:11.730 [2024-06-10 10:54:35.968913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.731 [2024-06-10 10:54:35.968920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.731 qpair failed and we were unable to recover it. 00:29:11.731 [2024-06-10 10:54:35.969147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.731 [2024-06-10 10:54:35.969154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.731 qpair failed and we were unable to recover it. 00:29:11.731 [2024-06-10 10:54:35.969553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.731 [2024-06-10 10:54:35.969560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.731 qpair failed and we were unable to recover it. 00:29:11.731 [2024-06-10 10:54:35.969916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.731 [2024-06-10 10:54:35.969923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.731 qpair failed and we were unable to recover it. 00:29:11.731 [2024-06-10 10:54:35.970357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.731 [2024-06-10 10:54:35.970364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.731 qpair failed and we were unable to recover it. 00:29:11.731 [2024-06-10 10:54:35.970700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.731 [2024-06-10 10:54:35.970706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.731 qpair failed and we were unable to recover it. 00:29:11.731 [2024-06-10 10:54:35.971056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.731 [2024-06-10 10:54:35.971062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.731 qpair failed and we were unable to recover it. 00:29:11.731 [2024-06-10 10:54:35.971416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.731 [2024-06-10 10:54:35.971422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.731 qpair failed and we were unable to recover it. 
00:29:11.731 [2024-06-10 10:54:35.971627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.731 [2024-06-10 10:54:35.971634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.731 qpair failed and we were unable to recover it. 00:29:11.731 [2024-06-10 10:54:35.971994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.731 [2024-06-10 10:54:35.972000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.731 qpair failed and we were unable to recover it. 00:29:11.731 [2024-06-10 10:54:35.972420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.731 [2024-06-10 10:54:35.972427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.731 qpair failed and we were unable to recover it. 00:29:11.731 [2024-06-10 10:54:35.972783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.731 [2024-06-10 10:54:35.972789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.731 qpair failed and we were unable to recover it. 00:29:11.731 [2024-06-10 10:54:35.973143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.731 [2024-06-10 10:54:35.973149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.731 qpair failed and we were unable to recover it. 00:29:11.731 [2024-06-10 10:54:35.973515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.731 [2024-06-10 10:54:35.973523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.731 qpair failed and we were unable to recover it. 00:29:11.731 [2024-06-10 10:54:35.973887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.731 [2024-06-10 10:54:35.973896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.731 qpair failed and we were unable to recover it. 00:29:11.731 [2024-06-10 10:54:35.974250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.731 [2024-06-10 10:54:35.974257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.731 qpair failed and we were unable to recover it. 00:29:11.731 [2024-06-10 10:54:35.974608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.731 [2024-06-10 10:54:35.974614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.731 qpair failed and we were unable to recover it. 00:29:11.731 [2024-06-10 10:54:35.974947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.731 [2024-06-10 10:54:35.974953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.731 qpair failed and we were unable to recover it. 
00:29:11.731 [2024-06-10 10:54:35.975205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.731 [2024-06-10 10:54:35.975212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.731 qpair failed and we were unable to recover it. 00:29:11.731 [2024-06-10 10:54:35.975570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.731 [2024-06-10 10:54:35.975576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.731 qpair failed and we were unable to recover it. 00:29:11.731 [2024-06-10 10:54:35.975788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.731 [2024-06-10 10:54:35.975794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.731 qpair failed and we were unable to recover it. 00:29:11.731 [2024-06-10 10:54:35.976161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.731 [2024-06-10 10:54:35.976167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.731 qpair failed and we were unable to recover it. 00:29:11.731 [2024-06-10 10:54:35.976523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.731 [2024-06-10 10:54:35.976529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.731 qpair failed and we were unable to recover it. 00:29:11.731 [2024-06-10 10:54:35.976832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.731 [2024-06-10 10:54:35.976838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.731 qpair failed and we were unable to recover it. 00:29:11.731 [2024-06-10 10:54:35.977198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.731 [2024-06-10 10:54:35.977204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.731 qpair failed and we were unable to recover it. 00:29:11.731 [2024-06-10 10:54:35.977545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.731 [2024-06-10 10:54:35.977551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.731 qpair failed and we were unable to recover it. 00:29:11.731 [2024-06-10 10:54:35.977782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.731 [2024-06-10 10:54:35.977789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.731 qpair failed and we were unable to recover it. 00:29:11.731 [2024-06-10 10:54:35.977991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.731 [2024-06-10 10:54:35.977998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.731 qpair failed and we were unable to recover it. 
00:29:11.731 [2024-06-10 10:54:35.978327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.731 [2024-06-10 10:54:35.978334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.731 qpair failed and we were unable to recover it. 00:29:11.731 [2024-06-10 10:54:35.978688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.731 [2024-06-10 10:54:35.978695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.731 qpair failed and we were unable to recover it. 00:29:11.731 [2024-06-10 10:54:35.979082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.731 [2024-06-10 10:54:35.979089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.731 qpair failed and we were unable to recover it. 00:29:11.731 [2024-06-10 10:54:35.979441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.731 [2024-06-10 10:54:35.979448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.731 qpair failed and we were unable to recover it. 00:29:11.731 [2024-06-10 10:54:35.979781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.731 [2024-06-10 10:54:35.979788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.731 qpair failed and we were unable to recover it. 00:29:11.731 [2024-06-10 10:54:35.980149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.731 [2024-06-10 10:54:35.980155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:11.731 qpair failed and we were unable to recover it. 00:29:12.005 [2024-06-10 10:54:35.980514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-06-10 10:54:35.980522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.005 qpair failed and we were unable to recover it. 00:29:12.005 [2024-06-10 10:54:35.980862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-06-10 10:54:35.980869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.005 qpair failed and we were unable to recover it. 00:29:12.005 [2024-06-10 10:54:35.981248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-06-10 10:54:35.981255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.005 qpair failed and we were unable to recover it. 00:29:12.005 [2024-06-10 10:54:35.981563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-06-10 10:54:35.981570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.005 qpair failed and we were unable to recover it. 
00:29:12.005 [2024-06-10 10:54:35.981911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-06-10 10:54:35.981917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.005 qpair failed and we were unable to recover it. 00:29:12.005 [2024-06-10 10:54:35.982292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-06-10 10:54:35.982299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.005 qpair failed and we were unable to recover it. 00:29:12.005 [2024-06-10 10:54:35.982662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-06-10 10:54:35.982668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.005 qpair failed and we were unable to recover it. 00:29:12.005 [2024-06-10 10:54:35.982997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-06-10 10:54:35.983004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.005 qpair failed and we were unable to recover it. 00:29:12.005 [2024-06-10 10:54:35.983269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-06-10 10:54:35.983275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.005 qpair failed and we were unable to recover it. 00:29:12.006 [2024-06-10 10:54:35.983579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-06-10 10:54:35.983586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-06-10 10:54:35.983942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-06-10 10:54:35.983948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-06-10 10:54:35.984326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-06-10 10:54:35.984333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-06-10 10:54:35.984674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-06-10 10:54:35.984680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-06-10 10:54:35.985038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-06-10 10:54:35.985044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 
00:29:12.006 [2024-06-10 10:54:35.985380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-06-10 10:54:35.985386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-06-10 10:54:35.985743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-06-10 10:54:35.985749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-06-10 10:54:35.986083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-06-10 10:54:35.986089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-06-10 10:54:35.986275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-06-10 10:54:35.986282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-06-10 10:54:35.986566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-06-10 10:54:35.986572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-06-10 10:54:35.986919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-06-10 10:54:35.986925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-06-10 10:54:35.987114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-06-10 10:54:35.987123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-06-10 10:54:35.987339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-06-10 10:54:35.987347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-06-10 10:54:35.987611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-06-10 10:54:35.987618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-06-10 10:54:35.987959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-06-10 10:54:35.987965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 
00:29:12.006 [2024-06-10 10:54:35.988204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-06-10 10:54:35.988210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-06-10 10:54:35.988572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-06-10 10:54:35.988578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-06-10 10:54:35.988942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-06-10 10:54:35.988949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-06-10 10:54:35.989306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-06-10 10:54:35.989313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-06-10 10:54:35.989656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-06-10 10:54:35.989662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-06-10 10:54:35.990004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-06-10 10:54:35.990010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-06-10 10:54:35.990369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-06-10 10:54:35.990376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-06-10 10:54:35.990711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-06-10 10:54:35.990717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-06-10 10:54:35.991068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-06-10 10:54:35.991075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-06-10 10:54:35.991432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-06-10 10:54:35.991439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 
00:29:12.006 [2024-06-10 10:54:35.991783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-06-10 10:54:35.991789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-06-10 10:54:35.992148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-06-10 10:54:35.992161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-06-10 10:54:35.992433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-06-10 10:54:35.992440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-06-10 10:54:35.992777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-06-10 10:54:35.992784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-06-10 10:54:35.993132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-06-10 10:54:35.993139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-06-10 10:54:35.993498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-06-10 10:54:35.993506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-06-10 10:54:35.993882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-06-10 10:54:35.993889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-06-10 10:54:35.994155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-06-10 10:54:35.994161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-06-10 10:54:35.994376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-06-10 10:54:35.994384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-06-10 10:54:35.994753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-06-10 10:54:35.994759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 
00:29:12.006 [2024-06-10 10:54:35.995093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.007 [2024-06-10 10:54:35.995101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420
00:29:12.007 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1037:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt logged between 10:54:35.995 and 10:54:36.066 ...]
00:29:12.013 [2024-06-10 10:54:36.066945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.013 [2024-06-10 10:54:36.066952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.013 qpair failed and we were unable to recover it. 00:29:12.013 [2024-06-10 10:54:36.067350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.013 [2024-06-10 10:54:36.067356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.013 qpair failed and we were unable to recover it. 00:29:12.013 [2024-06-10 10:54:36.067708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.013 [2024-06-10 10:54:36.067714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.013 qpair failed and we were unable to recover it. 00:29:12.013 [2024-06-10 10:54:36.067882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.013 [2024-06-10 10:54:36.067889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.013 qpair failed and we were unable to recover it. 00:29:12.013 [2024-06-10 10:54:36.068254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.013 [2024-06-10 10:54:36.068261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.013 qpair failed and we were unable to recover it. 00:29:12.013 [2024-06-10 10:54:36.068601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.013 [2024-06-10 10:54:36.068608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.013 qpair failed and we were unable to recover it. 00:29:12.013 [2024-06-10 10:54:36.068693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.013 [2024-06-10 10:54:36.068702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.013 qpair failed and we were unable to recover it. 00:29:12.013 [2024-06-10 10:54:36.069071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.013 [2024-06-10 10:54:36.069078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.013 qpair failed and we were unable to recover it. 00:29:12.013 [2024-06-10 10:54:36.069318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.013 [2024-06-10 10:54:36.069325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.013 qpair failed and we were unable to recover it. 00:29:12.013 [2024-06-10 10:54:36.069706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.013 [2024-06-10 10:54:36.069712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.013 qpair failed and we were unable to recover it. 
00:29:12.013 [2024-06-10 10:54:36.070053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.013 [2024-06-10 10:54:36.070060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.013 qpair failed and we were unable to recover it. 00:29:12.013 [2024-06-10 10:54:36.070412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.013 [2024-06-10 10:54:36.070418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.013 qpair failed and we were unable to recover it. 00:29:12.013 [2024-06-10 10:54:36.070756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.013 [2024-06-10 10:54:36.070762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.013 qpair failed and we were unable to recover it. 00:29:12.013 [2024-06-10 10:54:36.071127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.013 [2024-06-10 10:54:36.071134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.013 qpair failed and we were unable to recover it. 00:29:12.013 [2024-06-10 10:54:36.071508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.013 [2024-06-10 10:54:36.071515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.013 qpair failed and we were unable to recover it. 00:29:12.013 [2024-06-10 10:54:36.071865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.013 [2024-06-10 10:54:36.071872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.013 qpair failed and we were unable to recover it. 00:29:12.013 [2024-06-10 10:54:36.072249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.013 [2024-06-10 10:54:36.072256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.013 qpair failed and we were unable to recover it. 00:29:12.013 [2024-06-10 10:54:36.072584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.013 [2024-06-10 10:54:36.072597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.013 qpair failed and we were unable to recover it. 00:29:12.013 [2024-06-10 10:54:36.072957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.013 [2024-06-10 10:54:36.072964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.013 qpair failed and we were unable to recover it. 00:29:12.013 [2024-06-10 10:54:36.073318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.013 [2024-06-10 10:54:36.073325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.013 qpair failed and we were unable to recover it. 
00:29:12.013 [2024-06-10 10:54:36.073683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.013 [2024-06-10 10:54:36.073689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.013 qpair failed and we were unable to recover it. 00:29:12.013 [2024-06-10 10:54:36.074020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.013 [2024-06-10 10:54:36.074036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.013 qpair failed and we were unable to recover it. 00:29:12.013 [2024-06-10 10:54:36.074250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.013 [2024-06-10 10:54:36.074257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.013 qpair failed and we were unable to recover it. 00:29:12.013 [2024-06-10 10:54:36.074648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.013 [2024-06-10 10:54:36.074654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.013 qpair failed and we were unable to recover it. 00:29:12.013 [2024-06-10 10:54:36.075030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.013 [2024-06-10 10:54:36.075037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.013 qpair failed and we were unable to recover it. 00:29:12.014 [2024-06-10 10:54:36.075281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.014 [2024-06-10 10:54:36.075288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.014 qpair failed and we were unable to recover it. 00:29:12.014 [2024-06-10 10:54:36.075622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.014 [2024-06-10 10:54:36.075629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.014 qpair failed and we were unable to recover it. 00:29:12.014 [2024-06-10 10:54:36.075985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.014 [2024-06-10 10:54:36.075992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.014 qpair failed and we were unable to recover it. 00:29:12.014 [2024-06-10 10:54:36.076334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.014 [2024-06-10 10:54:36.076341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.014 qpair failed and we were unable to recover it. 00:29:12.014 [2024-06-10 10:54:36.076671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.014 [2024-06-10 10:54:36.076677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.014 qpair failed and we were unable to recover it. 
00:29:12.014 [2024-06-10 10:54:36.077017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.014 [2024-06-10 10:54:36.077023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.014 qpair failed and we were unable to recover it. 00:29:12.014 [2024-06-10 10:54:36.077425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.014 [2024-06-10 10:54:36.077432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.014 qpair failed and we were unable to recover it. 00:29:12.014 [2024-06-10 10:54:36.077812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.014 [2024-06-10 10:54:36.077818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.014 qpair failed and we were unable to recover it. 00:29:12.014 [2024-06-10 10:54:36.078225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.014 [2024-06-10 10:54:36.078232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.014 qpair failed and we were unable to recover it. 00:29:12.014 [2024-06-10 10:54:36.078581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.014 [2024-06-10 10:54:36.078587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.014 qpair failed and we were unable to recover it. 00:29:12.014 [2024-06-10 10:54:36.078835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.014 [2024-06-10 10:54:36.078841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.014 qpair failed and we were unable to recover it. 00:29:12.014 [2024-06-10 10:54:36.079081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.014 [2024-06-10 10:54:36.079087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.014 qpair failed and we were unable to recover it. 00:29:12.014 [2024-06-10 10:54:36.079168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.014 [2024-06-10 10:54:36.079175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.014 qpair failed and we were unable to recover it. 00:29:12.014 [2024-06-10 10:54:36.079469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.014 [2024-06-10 10:54:36.079475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.014 qpair failed and we were unable to recover it. 00:29:12.014 [2024-06-10 10:54:36.079831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.014 [2024-06-10 10:54:36.079838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.014 qpair failed and we were unable to recover it. 
00:29:12.014 [2024-06-10 10:54:36.080270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.014 [2024-06-10 10:54:36.080277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.014 qpair failed and we were unable to recover it. 00:29:12.014 [2024-06-10 10:54:36.080620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.014 [2024-06-10 10:54:36.080627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.014 qpair failed and we were unable to recover it. 00:29:12.014 [2024-06-10 10:54:36.081032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.014 [2024-06-10 10:54:36.081039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.014 qpair failed and we were unable to recover it. 00:29:12.014 [2024-06-10 10:54:36.081375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.014 [2024-06-10 10:54:36.081383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.014 qpair failed and we were unable to recover it. 00:29:12.014 [2024-06-10 10:54:36.081629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.014 [2024-06-10 10:54:36.081635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.014 qpair failed and we were unable to recover it. 00:29:12.014 [2024-06-10 10:54:36.081972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.014 [2024-06-10 10:54:36.081980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.014 qpair failed and we were unable to recover it. 00:29:12.014 [2024-06-10 10:54:36.082250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.014 [2024-06-10 10:54:36.082257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.014 qpair failed and we were unable to recover it. 00:29:12.014 [2024-06-10 10:54:36.082609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.014 [2024-06-10 10:54:36.082615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.014 qpair failed and we were unable to recover it. 00:29:12.014 [2024-06-10 10:54:36.082952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.014 [2024-06-10 10:54:36.082959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.014 qpair failed and we were unable to recover it. 00:29:12.014 [2024-06-10 10:54:36.083302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.014 [2024-06-10 10:54:36.083309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.014 qpair failed and we were unable to recover it. 
00:29:12.014 [2024-06-10 10:54:36.083586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.014 [2024-06-10 10:54:36.083593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.014 qpair failed and we were unable to recover it. 00:29:12.014 [2024-06-10 10:54:36.083950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.014 [2024-06-10 10:54:36.083956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.014 qpair failed and we were unable to recover it. 00:29:12.014 [2024-06-10 10:54:36.084282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.014 [2024-06-10 10:54:36.084289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.014 qpair failed and we were unable to recover it. 00:29:12.014 [2024-06-10 10:54:36.084628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.014 [2024-06-10 10:54:36.084634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.014 qpair failed and we were unable to recover it. 00:29:12.014 [2024-06-10 10:54:36.084886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.014 [2024-06-10 10:54:36.084893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.014 qpair failed and we were unable to recover it. 00:29:12.014 [2024-06-10 10:54:36.085247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.014 [2024-06-10 10:54:36.085253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.014 qpair failed and we were unable to recover it. 00:29:12.014 [2024-06-10 10:54:36.085569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.014 [2024-06-10 10:54:36.085575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.014 qpair failed and we were unable to recover it. 00:29:12.014 [2024-06-10 10:54:36.085772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.014 [2024-06-10 10:54:36.085779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.014 qpair failed and we were unable to recover it. 00:29:12.014 [2024-06-10 10:54:36.086035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.014 [2024-06-10 10:54:36.086042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.014 qpair failed and we were unable to recover it. 00:29:12.015 [2024-06-10 10:54:36.086389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.015 [2024-06-10 10:54:36.086395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.015 qpair failed and we were unable to recover it. 
00:29:12.015 [2024-06-10 10:54:36.086754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.015 [2024-06-10 10:54:36.086760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.015 qpair failed and we were unable to recover it. 00:29:12.015 [2024-06-10 10:54:36.087116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.015 [2024-06-10 10:54:36.087122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.015 qpair failed and we were unable to recover it. 00:29:12.015 [2024-06-10 10:54:36.087479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.015 [2024-06-10 10:54:36.087485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.015 qpair failed and we were unable to recover it. 00:29:12.015 [2024-06-10 10:54:36.087920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.015 [2024-06-10 10:54:36.087926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.015 qpair failed and we were unable to recover it. 00:29:12.015 [2024-06-10 10:54:36.088284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.015 [2024-06-10 10:54:36.088298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.015 qpair failed and we were unable to recover it. 00:29:12.015 [2024-06-10 10:54:36.088429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.015 [2024-06-10 10:54:36.088437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.015 qpair failed and we were unable to recover it. 00:29:12.015 [2024-06-10 10:54:36.088699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.015 [2024-06-10 10:54:36.088705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.015 qpair failed and we were unable to recover it. 00:29:12.015 [2024-06-10 10:54:36.089062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.015 [2024-06-10 10:54:36.089069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.015 qpair failed and we were unable to recover it. 00:29:12.015 [2024-06-10 10:54:36.089431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.015 [2024-06-10 10:54:36.089437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.015 qpair failed and we were unable to recover it. 00:29:12.015 [2024-06-10 10:54:36.089815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.015 [2024-06-10 10:54:36.089822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.015 qpair failed and we were unable to recover it. 
00:29:12.015 [2024-06-10 10:54:36.090084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.015 [2024-06-10 10:54:36.090091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.015 qpair failed and we were unable to recover it. 00:29:12.015 [2024-06-10 10:54:36.090365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.015 [2024-06-10 10:54:36.090372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.015 qpair failed and we were unable to recover it. 00:29:12.015 [2024-06-10 10:54:36.090696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.015 [2024-06-10 10:54:36.090703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.015 qpair failed and we were unable to recover it. 00:29:12.015 [2024-06-10 10:54:36.091058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.015 [2024-06-10 10:54:36.091066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.015 qpair failed and we were unable to recover it. 00:29:12.015 [2024-06-10 10:54:36.091414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.015 [2024-06-10 10:54:36.091420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.015 qpair failed and we were unable to recover it. 00:29:12.015 [2024-06-10 10:54:36.091792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.015 [2024-06-10 10:54:36.091799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.015 qpair failed and we were unable to recover it. 00:29:12.015 [2024-06-10 10:54:36.092018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.015 [2024-06-10 10:54:36.092025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.015 qpair failed and we were unable to recover it. 00:29:12.015 [2024-06-10 10:54:36.092215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.015 [2024-06-10 10:54:36.092222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.015 qpair failed and we were unable to recover it. 00:29:12.015 [2024-06-10 10:54:36.092660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.015 [2024-06-10 10:54:36.092667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.015 qpair failed and we were unable to recover it. 00:29:12.015 [2024-06-10 10:54:36.092895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.015 [2024-06-10 10:54:36.092901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.015 qpair failed and we were unable to recover it. 
00:29:12.015 [2024-06-10 10:54:36.093263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.015 [2024-06-10 10:54:36.093270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.015 qpair failed and we were unable to recover it. 00:29:12.015 [2024-06-10 10:54:36.093441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.015 [2024-06-10 10:54:36.093449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.015 qpair failed and we were unable to recover it. 00:29:12.015 [2024-06-10 10:54:36.093797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.015 [2024-06-10 10:54:36.093804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.015 qpair failed and we were unable to recover it. 00:29:12.015 [2024-06-10 10:54:36.094165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.015 [2024-06-10 10:54:36.094171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.015 qpair failed and we were unable to recover it. 00:29:12.015 [2024-06-10 10:54:36.094496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.015 [2024-06-10 10:54:36.094504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.015 qpair failed and we were unable to recover it. 00:29:12.015 [2024-06-10 10:54:36.094879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.015 [2024-06-10 10:54:36.094885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.015 qpair failed and we were unable to recover it. 00:29:12.015 [2024-06-10 10:54:36.095233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.015 [2024-06-10 10:54:36.095239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.015 qpair failed and we were unable to recover it. 00:29:12.015 [2024-06-10 10:54:36.095624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.015 [2024-06-10 10:54:36.095631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.015 qpair failed and we were unable to recover it. 00:29:12.015 [2024-06-10 10:54:36.095985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.015 [2024-06-10 10:54:36.095992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.015 qpair failed and we were unable to recover it. 00:29:12.015 [2024-06-10 10:54:36.096339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.015 [2024-06-10 10:54:36.096352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.015 qpair failed and we were unable to recover it. 
00:29:12.015 [2024-06-10 10:54:36.096589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.015 [2024-06-10 10:54:36.096596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.015 qpair failed and we were unable to recover it. 00:29:12.015 [2024-06-10 10:54:36.096957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.015 [2024-06-10 10:54:36.096963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.015 qpair failed and we were unable to recover it. 00:29:12.015 [2024-06-10 10:54:36.097290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.015 [2024-06-10 10:54:36.097296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.015 qpair failed and we were unable to recover it. 00:29:12.015 [2024-06-10 10:54:36.097621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.015 [2024-06-10 10:54:36.097635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.015 qpair failed and we were unable to recover it. 00:29:12.015 [2024-06-10 10:54:36.097995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.015 [2024-06-10 10:54:36.098001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.015 qpair failed and we were unable to recover it. 00:29:12.015 [2024-06-10 10:54:36.098342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.015 [2024-06-10 10:54:36.098350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.015 qpair failed and we were unable to recover it. 00:29:12.015 [2024-06-10 10:54:36.098707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.016 [2024-06-10 10:54:36.098713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-06-10 10:54:36.098957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.016 [2024-06-10 10:54:36.098964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-06-10 10:54:36.099324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.016 [2024-06-10 10:54:36.099331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-06-10 10:54:36.099611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.016 [2024-06-10 10:54:36.099617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.016 qpair failed and we were unable to recover it. 
00:29:12.016 [2024-06-10 10:54:36.099972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.016 [2024-06-10 10:54:36.099978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-06-10 10:54:36.100321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.016 [2024-06-10 10:54:36.100327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-06-10 10:54:36.100520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.016 [2024-06-10 10:54:36.100526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-06-10 10:54:36.100856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.016 [2024-06-10 10:54:36.100864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-06-10 10:54:36.101217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.016 [2024-06-10 10:54:36.101224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-06-10 10:54:36.101437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.016 [2024-06-10 10:54:36.101445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-06-10 10:54:36.101812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.016 [2024-06-10 10:54:36.101818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-06-10 10:54:36.102195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.016 [2024-06-10 10:54:36.102201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-06-10 10:54:36.102568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.016 [2024-06-10 10:54:36.102575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-06-10 10:54:36.102953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.016 [2024-06-10 10:54:36.102959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.016 qpair failed and we were unable to recover it. 
00:29:12.016 [2024-06-10 10:54:36.103384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.016 [2024-06-10 10:54:36.103390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-06-10 10:54:36.103669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.016 [2024-06-10 10:54:36.103675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-06-10 10:54:36.104031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.016 [2024-06-10 10:54:36.104037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-06-10 10:54:36.104405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.016 [2024-06-10 10:54:36.104414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-06-10 10:54:36.104776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.016 [2024-06-10 10:54:36.104782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-06-10 10:54:36.105116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.016 [2024-06-10 10:54:36.105123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-06-10 10:54:36.105442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.016 [2024-06-10 10:54:36.105448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-06-10 10:54:36.105701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.016 [2024-06-10 10:54:36.105707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-06-10 10:54:36.106067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.016 [2024-06-10 10:54:36.106074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-06-10 10:54:36.106397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.016 [2024-06-10 10:54:36.106405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.016 qpair failed and we were unable to recover it. 
00:29:12.016 [2024-06-10 10:54:36.106649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.016 [2024-06-10 10:54:36.106655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-06-10 10:54:36.106850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.016 [2024-06-10 10:54:36.106857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-06-10 10:54:36.107217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.016 [2024-06-10 10:54:36.107224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-06-10 10:54:36.107629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.016 [2024-06-10 10:54:36.107636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-06-10 10:54:36.108041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.016 [2024-06-10 10:54:36.108048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-06-10 10:54:36.108409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.016 [2024-06-10 10:54:36.108415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-06-10 10:54:36.108784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.016 [2024-06-10 10:54:36.108791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-06-10 10:54:36.109173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.016 [2024-06-10 10:54:36.109180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-06-10 10:54:36.109387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.016 [2024-06-10 10:54:36.109393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-06-10 10:54:36.109741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.016 [2024-06-10 10:54:36.109747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.016 qpair failed and we were unable to recover it. 
00:29:12.016 [2024-06-10 10:54:36.110082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.016 [2024-06-10 10:54:36.110088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-06-10 10:54:36.110448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.016 [2024-06-10 10:54:36.110455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-06-10 10:54:36.110817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.016 [2024-06-10 10:54:36.110823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-06-10 10:54:36.111101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.016 [2024-06-10 10:54:36.111107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.016 qpair failed and we were unable to recover it. 00:29:12.016 [2024-06-10 10:54:36.111498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.017 [2024-06-10 10:54:36.111504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.017 qpair failed and we were unable to recover it. 00:29:12.017 [2024-06-10 10:54:36.111842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.017 [2024-06-10 10:54:36.111848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.017 qpair failed and we were unable to recover it. 00:29:12.017 [2024-06-10 10:54:36.112085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.017 [2024-06-10 10:54:36.112092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.017 qpair failed and we were unable to recover it. 00:29:12.017 [2024-06-10 10:54:36.112470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.017 [2024-06-10 10:54:36.112477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.017 qpair failed and we were unable to recover it. 00:29:12.017 [2024-06-10 10:54:36.112810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.017 [2024-06-10 10:54:36.112816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.017 qpair failed and we were unable to recover it. 00:29:12.017 [2024-06-10 10:54:36.113174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.017 [2024-06-10 10:54:36.113180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.017 qpair failed and we were unable to recover it. 
00:29:12.017 [2024-06-10 10:54:36.113427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.017 [2024-06-10 10:54:36.113434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.017 qpair failed and we were unable to recover it. 00:29:12.017 [2024-06-10 10:54:36.113670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.017 [2024-06-10 10:54:36.113677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.017 qpair failed and we were unable to recover it. 00:29:12.017 [2024-06-10 10:54:36.113943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.017 [2024-06-10 10:54:36.113950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.017 qpair failed and we were unable to recover it. 00:29:12.017 [2024-06-10 10:54:36.114308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.017 [2024-06-10 10:54:36.114315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.017 qpair failed and we were unable to recover it. 00:29:12.017 [2024-06-10 10:54:36.114669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.017 [2024-06-10 10:54:36.114676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.017 qpair failed and we were unable to recover it. 00:29:12.017 [2024-06-10 10:54:36.114928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.017 [2024-06-10 10:54:36.114934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.017 qpair failed and we were unable to recover it. 00:29:12.017 [2024-06-10 10:54:36.115302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.017 [2024-06-10 10:54:36.115309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.017 qpair failed and we were unable to recover it. 00:29:12.017 [2024-06-10 10:54:36.115673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.017 [2024-06-10 10:54:36.115679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.017 qpair failed and we were unable to recover it. 00:29:12.017 [2024-06-10 10:54:36.116014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.017 [2024-06-10 10:54:36.116020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.017 qpair failed and we were unable to recover it. 00:29:12.017 [2024-06-10 10:54:36.116395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.017 [2024-06-10 10:54:36.116402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.017 qpair failed and we were unable to recover it. 
00:29:12.017 [2024-06-10 10:54:36.116834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.017 [2024-06-10 10:54:36.116841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.017 qpair failed and we were unable to recover it. 00:29:12.017 [2024-06-10 10:54:36.117087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.017 [2024-06-10 10:54:36.117093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.017 qpair failed and we were unable to recover it. 00:29:12.017 [2024-06-10 10:54:36.117358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.017 [2024-06-10 10:54:36.117364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.017 qpair failed and we were unable to recover it. 00:29:12.017 [2024-06-10 10:54:36.117632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.017 [2024-06-10 10:54:36.117640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.017 qpair failed and we were unable to recover it. 00:29:12.017 [2024-06-10 10:54:36.117995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.017 [2024-06-10 10:54:36.118001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.017 qpair failed and we were unable to recover it. 00:29:12.017 [2024-06-10 10:54:36.118333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.017 [2024-06-10 10:54:36.118340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.017 qpair failed and we were unable to recover it. 00:29:12.017 [2024-06-10 10:54:36.118715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.017 [2024-06-10 10:54:36.118722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.017 qpair failed and we were unable to recover it. 00:29:12.017 [2024-06-10 10:54:36.119055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.017 [2024-06-10 10:54:36.119062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.017 qpair failed and we were unable to recover it. 00:29:12.017 [2024-06-10 10:54:36.119416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.017 [2024-06-10 10:54:36.119422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.017 qpair failed and we were unable to recover it. 00:29:12.017 [2024-06-10 10:54:36.119809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.017 [2024-06-10 10:54:36.119815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.017 qpair failed and we were unable to recover it. 
00:29:12.017 [2024-06-10 10:54:36.120156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.017 [2024-06-10 10:54:36.120162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.017 qpair failed and we were unable to recover it. 00:29:12.017 [2024-06-10 10:54:36.120513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.017 [2024-06-10 10:54:36.120520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.017 qpair failed and we were unable to recover it. 00:29:12.017 [2024-06-10 10:54:36.120894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.017 [2024-06-10 10:54:36.120900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.017 qpair failed and we were unable to recover it. 00:29:12.017 [2024-06-10 10:54:36.121241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.017 [2024-06-10 10:54:36.121250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.017 qpair failed and we were unable to recover it. 00:29:12.017 [2024-06-10 10:54:36.121499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.017 [2024-06-10 10:54:36.121505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.017 qpair failed and we were unable to recover it. 00:29:12.017 [2024-06-10 10:54:36.121846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.017 [2024-06-10 10:54:36.121853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.017 qpair failed and we were unable to recover it. 00:29:12.017 [2024-06-10 10:54:36.122207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.017 [2024-06-10 10:54:36.122214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.017 qpair failed and we were unable to recover it. 00:29:12.018 [2024-06-10 10:54:36.122577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.018 [2024-06-10 10:54:36.122584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.018 qpair failed and we were unable to recover it. 00:29:12.018 [2024-06-10 10:54:36.122898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.018 [2024-06-10 10:54:36.122905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.018 qpair failed and we were unable to recover it. 00:29:12.018 [2024-06-10 10:54:36.123273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.018 [2024-06-10 10:54:36.123281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.018 qpair failed and we were unable to recover it. 
00:29:12.018 [2024-06-10 10:54:36.123682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.018 [2024-06-10 10:54:36.123688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.018 qpair failed and we were unable to recover it. 00:29:12.018 [2024-06-10 10:54:36.124025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.018 [2024-06-10 10:54:36.124031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.018 qpair failed and we were unable to recover it. 00:29:12.018 [2024-06-10 10:54:36.124369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.018 [2024-06-10 10:54:36.124376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.018 qpair failed and we were unable to recover it. 00:29:12.018 [2024-06-10 10:54:36.124748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.018 [2024-06-10 10:54:36.124754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.018 qpair failed and we were unable to recover it. 00:29:12.018 [2024-06-10 10:54:36.125094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.018 [2024-06-10 10:54:36.125100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.018 qpair failed and we were unable to recover it. 00:29:12.018 [2024-06-10 10:54:36.125510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.018 [2024-06-10 10:54:36.125516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.018 qpair failed and we were unable to recover it. 00:29:12.018 [2024-06-10 10:54:36.125850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.018 [2024-06-10 10:54:36.125856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.018 qpair failed and we were unable to recover it. 00:29:12.018 [2024-06-10 10:54:36.126247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.018 [2024-06-10 10:54:36.126255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.018 qpair failed and we were unable to recover it. 00:29:12.018 [2024-06-10 10:54:36.126618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.018 [2024-06-10 10:54:36.126624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.018 qpair failed and we were unable to recover it. 00:29:12.018 [2024-06-10 10:54:36.126962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.018 [2024-06-10 10:54:36.126968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.018 qpair failed and we were unable to recover it. 
00:29:12.018 [2024-06-10 10:54:36.127450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.018 [2024-06-10 10:54:36.127477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.018 qpair failed and we were unable to recover it. 00:29:12.018 [2024-06-10 10:54:36.127825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.018 [2024-06-10 10:54:36.127834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.018 qpair failed and we were unable to recover it. 00:29:12.018 [2024-06-10 10:54:36.128189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.018 [2024-06-10 10:54:36.128196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.018 qpair failed and we were unable to recover it. 00:29:12.018 [2024-06-10 10:54:36.128551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.018 [2024-06-10 10:54:36.128559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.018 qpair failed and we were unable to recover it. 00:29:12.018 [2024-06-10 10:54:36.128898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.018 [2024-06-10 10:54:36.128904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.018 qpair failed and we were unable to recover it. 00:29:12.018 [2024-06-10 10:54:36.129277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.018 [2024-06-10 10:54:36.129284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.018 qpair failed and we were unable to recover it. 00:29:12.018 [2024-06-10 10:54:36.129642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.018 [2024-06-10 10:54:36.129648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.018 qpair failed and we were unable to recover it. 00:29:12.018 [2024-06-10 10:54:36.130011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.018 [2024-06-10 10:54:36.130017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.018 qpair failed and we were unable to recover it. 00:29:12.018 [2024-06-10 10:54:36.130451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.018 [2024-06-10 10:54:36.130457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.018 qpair failed and we were unable to recover it. 00:29:12.018 [2024-06-10 10:54:36.130661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.018 [2024-06-10 10:54:36.130668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.018 qpair failed and we were unable to recover it. 
00:29:12.018 [2024-06-10 10:54:36.131024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.018 [2024-06-10 10:54:36.131030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.018 qpair failed and we were unable to recover it. 00:29:12.018 [2024-06-10 10:54:36.131365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.018 [2024-06-10 10:54:36.131372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.018 qpair failed and we were unable to recover it. 00:29:12.018 [2024-06-10 10:54:36.131738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.018 [2024-06-10 10:54:36.131744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.018 qpair failed and we were unable to recover it. 00:29:12.018 [2024-06-10 10:54:36.132082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.018 [2024-06-10 10:54:36.132093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.018 qpair failed and we were unable to recover it. 00:29:12.018 [2024-06-10 10:54:36.132446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.018 [2024-06-10 10:54:36.132453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.018 qpair failed and we were unable to recover it. 00:29:12.018 [2024-06-10 10:54:36.132709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.018 [2024-06-10 10:54:36.132715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.018 qpair failed and we were unable to recover it. 00:29:12.018 [2024-06-10 10:54:36.133057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.018 [2024-06-10 10:54:36.133064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.018 qpair failed and we were unable to recover it. 00:29:12.018 [2024-06-10 10:54:36.133299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.018 [2024-06-10 10:54:36.133306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.018 qpair failed and we were unable to recover it. 00:29:12.019 [2024-06-10 10:54:36.133693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.019 [2024-06-10 10:54:36.133699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.019 qpair failed and we were unable to recover it. 00:29:12.019 [2024-06-10 10:54:36.133836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.019 [2024-06-10 10:54:36.133842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.019 qpair failed and we were unable to recover it. 
00:29:12.019 [2024-06-10 10:54:36.134151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.019 [2024-06-10 10:54:36.134157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.019 qpair failed and we were unable to recover it. 00:29:12.019 [2024-06-10 10:54:36.134509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.019 [2024-06-10 10:54:36.134516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.019 qpair failed and we were unable to recover it. 00:29:12.019 [2024-06-10 10:54:36.134865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.019 [2024-06-10 10:54:36.134871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.019 qpair failed and we were unable to recover it. 00:29:12.019 [2024-06-10 10:54:36.135204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.019 [2024-06-10 10:54:36.135210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.019 qpair failed and we were unable to recover it. 00:29:12.019 [2024-06-10 10:54:36.135566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.019 [2024-06-10 10:54:36.135573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.019 qpair failed and we were unable to recover it. 00:29:12.019 [2024-06-10 10:54:36.135798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.019 [2024-06-10 10:54:36.135806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.019 qpair failed and we were unable to recover it. 00:29:12.019 [2024-06-10 10:54:36.136162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.019 [2024-06-10 10:54:36.136169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.019 qpair failed and we were unable to recover it. 00:29:12.019 [2024-06-10 10:54:36.136447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.019 [2024-06-10 10:54:36.136455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.019 qpair failed and we were unable to recover it. 00:29:12.019 [2024-06-10 10:54:36.136712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.019 [2024-06-10 10:54:36.136719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.019 qpair failed and we were unable to recover it. 00:29:12.019 [2024-06-10 10:54:36.137064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.019 [2024-06-10 10:54:36.137071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.019 qpair failed and we were unable to recover it. 
00:29:12.019 [2024-06-10 10:54:36.137428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.019 [2024-06-10 10:54:36.137435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.019 qpair failed and we were unable to recover it. 00:29:12.019 [2024-06-10 10:54:36.137768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.019 [2024-06-10 10:54:36.137774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.019 qpair failed and we were unable to recover it. 00:29:12.019 [2024-06-10 10:54:36.138134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.019 [2024-06-10 10:54:36.138140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.019 qpair failed and we were unable to recover it. 00:29:12.019 [2024-06-10 10:54:36.138461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.019 [2024-06-10 10:54:36.138468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.019 qpair failed and we were unable to recover it. 00:29:12.019 [2024-06-10 10:54:36.138802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.019 [2024-06-10 10:54:36.138809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.019 qpair failed and we were unable to recover it. 00:29:12.019 [2024-06-10 10:54:36.139188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.019 [2024-06-10 10:54:36.139194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.019 qpair failed and we were unable to recover it. 00:29:12.019 [2024-06-10 10:54:36.139547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.019 [2024-06-10 10:54:36.139554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.019 qpair failed and we were unable to recover it. 00:29:12.019 [2024-06-10 10:54:36.139909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.019 [2024-06-10 10:54:36.139915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.019 qpair failed and we were unable to recover it. 00:29:12.019 [2024-06-10 10:54:36.140268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.019 [2024-06-10 10:54:36.140275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.019 qpair failed and we were unable to recover it. 00:29:12.019 [2024-06-10 10:54:36.140610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.019 [2024-06-10 10:54:36.140616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.019 qpair failed and we were unable to recover it. 
00:29:12.019 [2024-06-10 10:54:36.140791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.019 [2024-06-10 10:54:36.140799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.019 qpair failed and we were unable to recover it. 00:29:12.019 [2024-06-10 10:54:36.141154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.019 [2024-06-10 10:54:36.141160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.019 qpair failed and we were unable to recover it. 00:29:12.019 [2024-06-10 10:54:36.141484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.019 [2024-06-10 10:54:36.141491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.019 qpair failed and we were unable to recover it. 00:29:12.019 [2024-06-10 10:54:36.141847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.019 [2024-06-10 10:54:36.141853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.019 qpair failed and we were unable to recover it. 00:29:12.019 [2024-06-10 10:54:36.142186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.019 [2024-06-10 10:54:36.142193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.019 qpair failed and we were unable to recover it. 00:29:12.019 [2024-06-10 10:54:36.142629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.019 [2024-06-10 10:54:36.142635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.019 qpair failed and we were unable to recover it. 00:29:12.019 [2024-06-10 10:54:36.142985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.019 [2024-06-10 10:54:36.142998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.019 qpair failed and we were unable to recover it. 00:29:12.019 [2024-06-10 10:54:36.143354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.019 [2024-06-10 10:54:36.143361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.019 qpair failed and we were unable to recover it. 00:29:12.019 [2024-06-10 10:54:36.143681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.019 [2024-06-10 10:54:36.143688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.019 qpair failed and we were unable to recover it. 00:29:12.019 [2024-06-10 10:54:36.144048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.019 [2024-06-10 10:54:36.144056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.019 qpair failed and we were unable to recover it. 
00:29:12.019 [2024-06-10 10:54:36.144405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.019 [2024-06-10 10:54:36.144412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.019 qpair failed and we were unable to recover it. 00:29:12.019 [2024-06-10 10:54:36.144749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.019 [2024-06-10 10:54:36.144755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.019 qpair failed and we were unable to recover it. 00:29:12.019 [2024-06-10 10:54:36.145109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.019 [2024-06-10 10:54:36.145116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.020 qpair failed and we were unable to recover it. 00:29:12.020 [2024-06-10 10:54:36.145487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.020 [2024-06-10 10:54:36.145496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.020 qpair failed and we were unable to recover it. 00:29:12.020 [2024-06-10 10:54:36.145831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.020 [2024-06-10 10:54:36.145837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.020 qpair failed and we were unable to recover it. 00:29:12.020 [2024-06-10 10:54:36.146195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.020 [2024-06-10 10:54:36.146201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.020 qpair failed and we were unable to recover it. 00:29:12.020 [2024-06-10 10:54:36.146541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.020 [2024-06-10 10:54:36.146547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.020 qpair failed and we were unable to recover it. 00:29:12.020 [2024-06-10 10:54:36.146903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.020 [2024-06-10 10:54:36.146909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.020 qpair failed and we were unable to recover it. 00:29:12.020 [2024-06-10 10:54:36.147267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.020 [2024-06-10 10:54:36.147273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.020 qpair failed and we were unable to recover it. 00:29:12.020 [2024-06-10 10:54:36.147635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.020 [2024-06-10 10:54:36.147642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.020 qpair failed and we were unable to recover it. 
00:29:12.020 [2024-06-10 10:54:36.147870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.020 [2024-06-10 10:54:36.147876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.020 qpair failed and we were unable to recover it. 00:29:12.020 [2024-06-10 10:54:36.148134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.020 [2024-06-10 10:54:36.148140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.020 qpair failed and we were unable to recover it. 00:29:12.020 [2024-06-10 10:54:36.148393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.020 [2024-06-10 10:54:36.148400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.020 qpair failed and we were unable to recover it. 00:29:12.020 [2024-06-10 10:54:36.148768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.020 [2024-06-10 10:54:36.148774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.020 qpair failed and we were unable to recover it. 00:29:12.020 [2024-06-10 10:54:36.149110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.020 [2024-06-10 10:54:36.149116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.020 qpair failed and we were unable to recover it. 00:29:12.020 [2024-06-10 10:54:36.149460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.020 [2024-06-10 10:54:36.149467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.020 qpair failed and we were unable to recover it. 00:29:12.020 [2024-06-10 10:54:36.149824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.020 [2024-06-10 10:54:36.149830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.020 qpair failed and we were unable to recover it. 00:29:12.020 [2024-06-10 10:54:36.150171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.020 [2024-06-10 10:54:36.150179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.020 qpair failed and we were unable to recover it. 00:29:12.020 [2024-06-10 10:54:36.150530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.020 [2024-06-10 10:54:36.150536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.020 qpair failed and we were unable to recover it. 00:29:12.020 [2024-06-10 10:54:36.150867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.020 [2024-06-10 10:54:36.150875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.020 qpair failed and we were unable to recover it. 
00:29:12.020 [2024-06-10 10:54:36.151065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.020 [2024-06-10 10:54:36.151073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.020 qpair failed and we were unable to recover it. 00:29:12.020 [2024-06-10 10:54:36.151424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.020 [2024-06-10 10:54:36.151431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.020 qpair failed and we were unable to recover it. 00:29:12.020 [2024-06-10 10:54:36.151771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.020 [2024-06-10 10:54:36.151777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.020 qpair failed and we were unable to recover it. 00:29:12.020 [2024-06-10 10:54:36.152160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.020 [2024-06-10 10:54:36.152167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.020 qpair failed and we were unable to recover it. 00:29:12.020 [2024-06-10 10:54:36.152523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.020 [2024-06-10 10:54:36.152530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.020 qpair failed and we were unable to recover it. 00:29:12.020 [2024-06-10 10:54:36.152864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.020 [2024-06-10 10:54:36.152870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.020 qpair failed and we were unable to recover it. 00:29:12.020 [2024-06-10 10:54:36.153230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.020 [2024-06-10 10:54:36.153247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.020 qpair failed and we were unable to recover it. 00:29:12.020 [2024-06-10 10:54:36.153613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.020 [2024-06-10 10:54:36.153619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.020 qpair failed and we were unable to recover it. 00:29:12.020 [2024-06-10 10:54:36.153954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.020 [2024-06-10 10:54:36.153960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.020 qpair failed and we were unable to recover it. 00:29:12.020 [2024-06-10 10:54:36.154316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.020 [2024-06-10 10:54:36.154323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.020 qpair failed and we were unable to recover it. 
00:29:12.020 [2024-06-10 10:54:36.154678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.020 [2024-06-10 10:54:36.154685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.020 qpair failed and we were unable to recover it. 00:29:12.020 [2024-06-10 10:54:36.155022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.020 [2024-06-10 10:54:36.155028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.020 qpair failed and we were unable to recover it. 00:29:12.020 [2024-06-10 10:54:36.155397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.020 [2024-06-10 10:54:36.155403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.020 qpair failed and we were unable to recover it. 00:29:12.020 [2024-06-10 10:54:36.155601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.020 [2024-06-10 10:54:36.155609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.020 qpair failed and we were unable to recover it. 00:29:12.020 [2024-06-10 10:54:36.155962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.020 [2024-06-10 10:54:36.155968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.020 qpair failed and we were unable to recover it. 00:29:12.020 [2024-06-10 10:54:36.156312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.020 [2024-06-10 10:54:36.156319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.020 qpair failed and we were unable to recover it. 00:29:12.020 [2024-06-10 10:54:36.156640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.020 [2024-06-10 10:54:36.156646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.020 qpair failed and we were unable to recover it. 00:29:12.020 [2024-06-10 10:54:36.157000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.020 [2024-06-10 10:54:36.157006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.020 qpair failed and we were unable to recover it. 00:29:12.020 [2024-06-10 10:54:36.157346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.021 [2024-06-10 10:54:36.157353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.021 qpair failed and we were unable to recover it. 00:29:12.021 [2024-06-10 10:54:36.157583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.021 [2024-06-10 10:54:36.157590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.021 qpair failed and we were unable to recover it. 
00:29:12.021 [2024-06-10 10:54:36.157929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.021 [2024-06-10 10:54:36.157935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.021 qpair failed and we were unable to recover it. 00:29:12.021 [2024-06-10 10:54:36.158275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.021 [2024-06-10 10:54:36.158282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.021 qpair failed and we were unable to recover it. 00:29:12.021 [2024-06-10 10:54:36.158597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.021 [2024-06-10 10:54:36.158603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.021 qpair failed and we were unable to recover it. 00:29:12.021 [2024-06-10 10:54:36.158977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.021 [2024-06-10 10:54:36.158986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.021 qpair failed and we were unable to recover it. 00:29:12.021 [2024-06-10 10:54:36.159340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.021 [2024-06-10 10:54:36.159346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.021 qpair failed and we were unable to recover it. 00:29:12.021 [2024-06-10 10:54:36.159503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.021 [2024-06-10 10:54:36.159511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.021 qpair failed and we were unable to recover it. 00:29:12.021 [2024-06-10 10:54:36.159900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.021 [2024-06-10 10:54:36.159907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.021 qpair failed and we were unable to recover it. 00:29:12.021 [2024-06-10 10:54:36.160251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.021 [2024-06-10 10:54:36.160258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.021 qpair failed and we were unable to recover it. 00:29:12.021 [2024-06-10 10:54:36.160617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.021 [2024-06-10 10:54:36.160624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.021 qpair failed and we were unable to recover it. 00:29:12.021 [2024-06-10 10:54:36.160961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.021 [2024-06-10 10:54:36.160967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.021 qpair failed and we were unable to recover it. 
00:29:12.021 [2024-06-10 10:54:36.161259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.021 [2024-06-10 10:54:36.161265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.021 qpair failed and we were unable to recover it. 00:29:12.021 [2024-06-10 10:54:36.161626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.021 [2024-06-10 10:54:36.161632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.021 qpair failed and we were unable to recover it. 00:29:12.021 [2024-06-10 10:54:36.161969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.021 [2024-06-10 10:54:36.161975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.021 qpair failed and we were unable to recover it. 00:29:12.021 [2024-06-10 10:54:36.162347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.021 [2024-06-10 10:54:36.162354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.021 qpair failed and we were unable to recover it. 00:29:12.021 [2024-06-10 10:54:36.162706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.021 [2024-06-10 10:54:36.162712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.021 qpair failed and we were unable to recover it. 00:29:12.021 [2024-06-10 10:54:36.163052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.021 [2024-06-10 10:54:36.163059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.021 qpair failed and we were unable to recover it. 00:29:12.021 [2024-06-10 10:54:36.163414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.021 [2024-06-10 10:54:36.163421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.021 qpair failed and we were unable to recover it. 00:29:12.021 [2024-06-10 10:54:36.163754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.021 [2024-06-10 10:54:36.163761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.021 qpair failed and we were unable to recover it. 00:29:12.021 [2024-06-10 10:54:36.164117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.021 [2024-06-10 10:54:36.164123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.021 qpair failed and we were unable to recover it. 00:29:12.021 [2024-06-10 10:54:36.164499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.021 [2024-06-10 10:54:36.164506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.021 qpair failed and we were unable to recover it. 
00:29:12.021 [2024-06-10 10:54:36.164873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.021 [2024-06-10 10:54:36.164880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.021 qpair failed and we were unable to recover it. 00:29:12.021 [2024-06-10 10:54:36.165271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.021 [2024-06-10 10:54:36.165277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.021 qpair failed and we were unable to recover it. 00:29:12.021 [2024-06-10 10:54:36.165517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.021 [2024-06-10 10:54:36.165523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.021 qpair failed and we were unable to recover it. 00:29:12.021 [2024-06-10 10:54:36.165919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.021 [2024-06-10 10:54:36.165925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.021 qpair failed and we were unable to recover it. 00:29:12.021 [2024-06-10 10:54:36.166313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.021 [2024-06-10 10:54:36.166319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.021 qpair failed and we were unable to recover it. 00:29:12.021 [2024-06-10 10:54:36.166673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.021 [2024-06-10 10:54:36.166679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.021 qpair failed and we were unable to recover it. 00:29:12.021 [2024-06-10 10:54:36.167034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.021 [2024-06-10 10:54:36.167040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.021 qpair failed and we were unable to recover it. 00:29:12.021 [2024-06-10 10:54:36.167387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.021 [2024-06-10 10:54:36.167394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.021 qpair failed and we were unable to recover it. 00:29:12.021 [2024-06-10 10:54:36.167721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.021 [2024-06-10 10:54:36.167727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.021 qpair failed and we were unable to recover it. 00:29:12.021 [2024-06-10 10:54:36.168083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.021 [2024-06-10 10:54:36.168089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.021 qpair failed and we were unable to recover it. 
00:29:12.021 [2024-06-10 10:54:36.168422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.021 [2024-06-10 10:54:36.168429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420
00:29:12.021 qpair failed and we were unable to recover it.
[... the same three-line failure record (posix_sock_create connect() refused with errno = 111, nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it") repeats continuously, differing only in timestamps, from 10:54:36.168 through 10:54:36.239 (elapsed 00:29:12.021 to 00:29:12.027) ...]
00:29:12.027 [2024-06-10 10:54:36.239310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.027 [2024-06-10 10:54:36.239318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420
00:29:12.027 qpair failed and we were unable to recover it.
00:29:12.027 [2024-06-10 10:54:36.239696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.028 [2024-06-10 10:54:36.239703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.028 qpair failed and we were unable to recover it. 00:29:12.028 [2024-06-10 10:54:36.240057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.028 [2024-06-10 10:54:36.240064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.028 qpair failed and we were unable to recover it. 00:29:12.028 [2024-06-10 10:54:36.240252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.028 [2024-06-10 10:54:36.240259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.028 qpair failed and we were unable to recover it. 00:29:12.028 [2024-06-10 10:54:36.240581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.028 [2024-06-10 10:54:36.240589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.028 qpair failed and we were unable to recover it. 00:29:12.028 [2024-06-10 10:54:36.240947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.028 [2024-06-10 10:54:36.240953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.028 qpair failed and we were unable to recover it. 00:29:12.028 [2024-06-10 10:54:36.241333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.028 [2024-06-10 10:54:36.241339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.028 qpair failed and we were unable to recover it. 00:29:12.028 [2024-06-10 10:54:36.241680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.028 [2024-06-10 10:54:36.241686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.028 qpair failed and we were unable to recover it. 00:29:12.028 [2024-06-10 10:54:36.242047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.028 [2024-06-10 10:54:36.242053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.028 qpair failed and we were unable to recover it. 00:29:12.028 [2024-06-10 10:54:36.242408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.028 [2024-06-10 10:54:36.242415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.028 qpair failed and we were unable to recover it. 00:29:12.028 [2024-06-10 10:54:36.242750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.028 [2024-06-10 10:54:36.242756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.028 qpair failed and we were unable to recover it. 
00:29:12.028 [2024-06-10 10:54:36.243112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.028 [2024-06-10 10:54:36.243118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.028 qpair failed and we were unable to recover it. 00:29:12.028 [2024-06-10 10:54:36.243495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.028 [2024-06-10 10:54:36.243502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.028 qpair failed and we were unable to recover it. 00:29:12.028 [2024-06-10 10:54:36.243838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.028 [2024-06-10 10:54:36.243844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.028 qpair failed and we were unable to recover it. 00:29:12.028 [2024-06-10 10:54:36.244209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.028 [2024-06-10 10:54:36.244215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.028 qpair failed and we were unable to recover it. 00:29:12.028 [2024-06-10 10:54:36.244561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.028 [2024-06-10 10:54:36.244568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.028 qpair failed and we were unable to recover it. 00:29:12.028 [2024-06-10 10:54:36.244905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.028 [2024-06-10 10:54:36.244911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.028 qpair failed and we were unable to recover it. 00:29:12.028 [2024-06-10 10:54:36.245164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.028 [2024-06-10 10:54:36.245170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.028 qpair failed and we were unable to recover it. 00:29:12.028 [2024-06-10 10:54:36.245531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.028 [2024-06-10 10:54:36.245538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.028 qpair failed and we were unable to recover it. 00:29:12.028 [2024-06-10 10:54:36.245885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.028 [2024-06-10 10:54:36.245892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.028 qpair failed and we were unable to recover it. 00:29:12.028 [2024-06-10 10:54:36.246265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.028 [2024-06-10 10:54:36.246272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.028 qpair failed and we were unable to recover it. 
00:29:12.028 [2024-06-10 10:54:36.246608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.028 [2024-06-10 10:54:36.246615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.028 qpair failed and we were unable to recover it. 00:29:12.028 [2024-06-10 10:54:36.246860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.028 [2024-06-10 10:54:36.246866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.028 qpair failed and we were unable to recover it. 00:29:12.028 [2024-06-10 10:54:36.247184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.028 [2024-06-10 10:54:36.247190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.028 qpair failed and we were unable to recover it. 00:29:12.028 [2024-06-10 10:54:36.247557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.028 [2024-06-10 10:54:36.247564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.028 qpair failed and we were unable to recover it. 00:29:12.028 [2024-06-10 10:54:36.247922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.028 [2024-06-10 10:54:36.247928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.028 qpair failed and we were unable to recover it. 00:29:12.028 [2024-06-10 10:54:36.248276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.028 [2024-06-10 10:54:36.248282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.028 qpair failed and we were unable to recover it. 00:29:12.028 [2024-06-10 10:54:36.248622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.028 [2024-06-10 10:54:36.248629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.028 qpair failed and we were unable to recover it. 00:29:12.028 [2024-06-10 10:54:36.249004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.028 [2024-06-10 10:54:36.249010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.028 qpair failed and we were unable to recover it. 00:29:12.028 [2024-06-10 10:54:36.249343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.028 [2024-06-10 10:54:36.249351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.029 qpair failed and we were unable to recover it. 00:29:12.029 [2024-06-10 10:54:36.249711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.029 [2024-06-10 10:54:36.249717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.029 qpair failed and we were unable to recover it. 
00:29:12.029 [2024-06-10 10:54:36.250053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.029 [2024-06-10 10:54:36.250059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.029 qpair failed and we were unable to recover it. 00:29:12.029 [2024-06-10 10:54:36.250413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.029 [2024-06-10 10:54:36.250420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.029 qpair failed and we were unable to recover it. 00:29:12.029 [2024-06-10 10:54:36.250623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.029 [2024-06-10 10:54:36.250630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.029 qpair failed and we were unable to recover it. 00:29:12.029 [2024-06-10 10:54:36.250935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.029 [2024-06-10 10:54:36.250942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.029 qpair failed and we were unable to recover it. 00:29:12.029 [2024-06-10 10:54:36.251299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.029 [2024-06-10 10:54:36.251306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.029 qpair failed and we were unable to recover it. 00:29:12.029 [2024-06-10 10:54:36.251488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.029 [2024-06-10 10:54:36.251494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.029 qpair failed and we were unable to recover it. 00:29:12.029 [2024-06-10 10:54:36.251804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.029 [2024-06-10 10:54:36.251810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.029 qpair failed and we were unable to recover it. 00:29:12.029 [2024-06-10 10:54:36.252072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.029 [2024-06-10 10:54:36.252079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.029 qpair failed and we were unable to recover it. 00:29:12.029 [2024-06-10 10:54:36.252439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.029 [2024-06-10 10:54:36.252446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.029 qpair failed and we were unable to recover it. 00:29:12.029 [2024-06-10 10:54:36.252789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.029 [2024-06-10 10:54:36.252795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.029 qpair failed and we were unable to recover it. 
00:29:12.029 [2024-06-10 10:54:36.253148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.029 [2024-06-10 10:54:36.253155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.029 qpair failed and we were unable to recover it. 00:29:12.029 [2024-06-10 10:54:36.253304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.029 [2024-06-10 10:54:36.253312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.029 qpair failed and we were unable to recover it. 00:29:12.029 [2024-06-10 10:54:36.253598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.029 [2024-06-10 10:54:36.253604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.029 qpair failed and we were unable to recover it. 00:29:12.029 [2024-06-10 10:54:36.253955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.029 [2024-06-10 10:54:36.253962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.029 qpair failed and we were unable to recover it. 00:29:12.029 [2024-06-10 10:54:36.254180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.029 [2024-06-10 10:54:36.254187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.029 qpair failed and we were unable to recover it. 00:29:12.029 [2024-06-10 10:54:36.254553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.029 [2024-06-10 10:54:36.254560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.029 qpair failed and we were unable to recover it. 00:29:12.029 [2024-06-10 10:54:36.254988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.029 [2024-06-10 10:54:36.254995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.029 qpair failed and we were unable to recover it. 00:29:12.029 [2024-06-10 10:54:36.255332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.029 [2024-06-10 10:54:36.255338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.029 qpair failed and we were unable to recover it. 00:29:12.029 [2024-06-10 10:54:36.255679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.029 [2024-06-10 10:54:36.255685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.029 qpair failed and we were unable to recover it. 00:29:12.029 [2024-06-10 10:54:36.256031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.029 [2024-06-10 10:54:36.256037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.029 qpair failed and we were unable to recover it. 
00:29:12.029 [2024-06-10 10:54:36.256375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.029 [2024-06-10 10:54:36.256381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.029 qpair failed and we were unable to recover it. 00:29:12.029 [2024-06-10 10:54:36.256724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.029 [2024-06-10 10:54:36.256731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.029 qpair failed and we were unable to recover it. 00:29:12.029 [2024-06-10 10:54:36.257096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.029 [2024-06-10 10:54:36.257102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.029 qpair failed and we were unable to recover it. 00:29:12.029 [2024-06-10 10:54:36.257451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.029 [2024-06-10 10:54:36.257457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.029 qpair failed and we were unable to recover it. 00:29:12.029 [2024-06-10 10:54:36.257770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.029 [2024-06-10 10:54:36.257777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.029 qpair failed and we were unable to recover it. 00:29:12.029 [2024-06-10 10:54:36.258142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.029 [2024-06-10 10:54:36.258148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.029 qpair failed and we were unable to recover it. 00:29:12.029 [2024-06-10 10:54:36.258359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.029 [2024-06-10 10:54:36.258366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.029 qpair failed and we were unable to recover it. 00:29:12.029 [2024-06-10 10:54:36.258608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.029 [2024-06-10 10:54:36.258615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.029 qpair failed and we were unable to recover it. 00:29:12.029 [2024-06-10 10:54:36.258981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.029 [2024-06-10 10:54:36.258987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.029 qpair failed and we were unable to recover it. 00:29:12.029 [2024-06-10 10:54:36.259368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.029 [2024-06-10 10:54:36.259375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.029 qpair failed and we were unable to recover it. 
00:29:12.029 [2024-06-10 10:54:36.259730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.029 [2024-06-10 10:54:36.259737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.029 qpair failed and we were unable to recover it. 00:29:12.029 [2024-06-10 10:54:36.260094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.029 [2024-06-10 10:54:36.260101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.029 qpair failed and we were unable to recover it. 00:29:12.029 [2024-06-10 10:54:36.260456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.029 [2024-06-10 10:54:36.260462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.029 qpair failed and we were unable to recover it. 00:29:12.029 [2024-06-10 10:54:36.260800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.029 [2024-06-10 10:54:36.260806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.029 qpair failed and we were unable to recover it. 00:29:12.029 [2024-06-10 10:54:36.260990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.029 [2024-06-10 10:54:36.260997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.030 qpair failed and we were unable to recover it. 00:29:12.030 [2024-06-10 10:54:36.261315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.030 [2024-06-10 10:54:36.261323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.030 qpair failed and we were unable to recover it. 00:29:12.030 [2024-06-10 10:54:36.261678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.030 [2024-06-10 10:54:36.261685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.030 qpair failed and we were unable to recover it. 00:29:12.030 [2024-06-10 10:54:36.262023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.030 [2024-06-10 10:54:36.262029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.030 qpair failed and we were unable to recover it. 00:29:12.030 [2024-06-10 10:54:36.262405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.030 [2024-06-10 10:54:36.262411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.030 qpair failed and we were unable to recover it. 00:29:12.030 [2024-06-10 10:54:36.262767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.030 [2024-06-10 10:54:36.262774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.030 qpair failed and we were unable to recover it. 
00:29:12.030 [2024-06-10 10:54:36.263066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.030 [2024-06-10 10:54:36.263074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.030 qpair failed and we were unable to recover it. 00:29:12.030 [2024-06-10 10:54:36.263435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.030 [2024-06-10 10:54:36.263442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.030 qpair failed and we were unable to recover it. 00:29:12.030 [2024-06-10 10:54:36.263778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.030 [2024-06-10 10:54:36.263784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.030 qpair failed and we were unable to recover it. 00:29:12.030 [2024-06-10 10:54:36.264154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.030 [2024-06-10 10:54:36.264169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.030 qpair failed and we were unable to recover it. 00:29:12.030 [2024-06-10 10:54:36.264534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.030 [2024-06-10 10:54:36.264540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.030 qpair failed and we were unable to recover it. 00:29:12.030 [2024-06-10 10:54:36.264874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.030 [2024-06-10 10:54:36.264881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.030 qpair failed and we were unable to recover it. 00:29:12.030 [2024-06-10 10:54:36.265235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.030 [2024-06-10 10:54:36.265244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.030 qpair failed and we were unable to recover it. 00:29:12.030 [2024-06-10 10:54:36.265425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.030 [2024-06-10 10:54:36.265432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.030 qpair failed and we were unable to recover it. 00:29:12.030 [2024-06-10 10:54:36.265748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.030 [2024-06-10 10:54:36.265754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.030 qpair failed and we were unable to recover it. 00:29:12.030 [2024-06-10 10:54:36.266098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.030 [2024-06-10 10:54:36.266104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.030 qpair failed and we were unable to recover it. 
00:29:12.030 [2024-06-10 10:54:36.266530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.030 [2024-06-10 10:54:36.266537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.030 qpair failed and we were unable to recover it. 00:29:12.030 [2024-06-10 10:54:36.266869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.030 [2024-06-10 10:54:36.266876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.030 qpair failed and we were unable to recover it. 00:29:12.030 [2024-06-10 10:54:36.267232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.030 [2024-06-10 10:54:36.267239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.030 qpair failed and we were unable to recover it. 00:29:12.030 [2024-06-10 10:54:36.267617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.030 [2024-06-10 10:54:36.267623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.030 qpair failed and we were unable to recover it. 00:29:12.030 [2024-06-10 10:54:36.267919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.030 [2024-06-10 10:54:36.267926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.030 qpair failed and we were unable to recover it. 00:29:12.030 [2024-06-10 10:54:36.268281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.030 [2024-06-10 10:54:36.268287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.030 qpair failed and we were unable to recover it. 00:29:12.030 [2024-06-10 10:54:36.268654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.030 [2024-06-10 10:54:36.268661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.030 qpair failed and we were unable to recover it. 00:29:12.030 [2024-06-10 10:54:36.269036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.030 [2024-06-10 10:54:36.269042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.030 qpair failed and we were unable to recover it. 00:29:12.030 [2024-06-10 10:54:36.269376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.030 [2024-06-10 10:54:36.269383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.030 qpair failed and we were unable to recover it. 00:29:12.030 [2024-06-10 10:54:36.269762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.030 [2024-06-10 10:54:36.269768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.030 qpair failed and we were unable to recover it. 
00:29:12.030 [2024-06-10 10:54:36.270107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.030 [2024-06-10 10:54:36.270113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.030 qpair failed and we were unable to recover it. 00:29:12.030 [2024-06-10 10:54:36.270471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.030 [2024-06-10 10:54:36.270478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.030 qpair failed and we were unable to recover it. 00:29:12.030 [2024-06-10 10:54:36.270912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.030 [2024-06-10 10:54:36.270918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.030 qpair failed and we were unable to recover it. 00:29:12.030 [2024-06-10 10:54:36.271167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.030 [2024-06-10 10:54:36.271173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.030 qpair failed and we were unable to recover it. 00:29:12.030 [2024-06-10 10:54:36.271423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.030 [2024-06-10 10:54:36.271430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.030 qpair failed and we were unable to recover it. 00:29:12.030 [2024-06-10 10:54:36.271651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.030 [2024-06-10 10:54:36.271658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.030 qpair failed and we were unable to recover it. 00:29:12.030 [2024-06-10 10:54:36.271933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.030 [2024-06-10 10:54:36.271939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.030 qpair failed and we were unable to recover it. 00:29:12.030 [2024-06-10 10:54:36.272285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.030 [2024-06-10 10:54:36.272292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.030 qpair failed and we were unable to recover it. 00:29:12.030 [2024-06-10 10:54:36.272634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.030 [2024-06-10 10:54:36.272640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.030 qpair failed and we were unable to recover it. 00:29:12.030 [2024-06-10 10:54:36.272991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.030 [2024-06-10 10:54:36.272997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.030 qpair failed and we were unable to recover it. 
00:29:12.030 [2024-06-10 10:54:36.273354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.030 [2024-06-10 10:54:36.273360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.030 qpair failed and we were unable to recover it. 00:29:12.030 [2024-06-10 10:54:36.273694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.030 [2024-06-10 10:54:36.273701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.030 qpair failed and we were unable to recover it. 00:29:12.031 [2024-06-10 10:54:36.274055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.031 [2024-06-10 10:54:36.274061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.031 qpair failed and we were unable to recover it. 00:29:12.031 [2024-06-10 10:54:36.274402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.031 [2024-06-10 10:54:36.274409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.031 qpair failed and we were unable to recover it. 00:29:12.031 [2024-06-10 10:54:36.274797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.031 [2024-06-10 10:54:36.274810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.031 qpair failed and we were unable to recover it. 00:29:12.031 [2024-06-10 10:54:36.275168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.031 [2024-06-10 10:54:36.275174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.031 qpair failed and we were unable to recover it. 00:29:12.031 [2024-06-10 10:54:36.275508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.031 [2024-06-10 10:54:36.275515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.031 qpair failed and we were unable to recover it. 00:29:12.031 [2024-06-10 10:54:36.275875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.031 [2024-06-10 10:54:36.275882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.031 qpair failed and we were unable to recover it. 00:29:12.031 [2024-06-10 10:54:36.276101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.031 [2024-06-10 10:54:36.276109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.031 qpair failed and we were unable to recover it. 00:29:12.031 [2024-06-10 10:54:36.276465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.031 [2024-06-10 10:54:36.276471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.031 qpair failed and we were unable to recover it. 
00:29:12.031 [2024-06-10 10:54:36.276807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.031 [2024-06-10 10:54:36.276815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.031 qpair failed and we were unable to recover it. 00:29:12.031 [2024-06-10 10:54:36.277047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.031 [2024-06-10 10:54:36.277054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.031 qpair failed and we were unable to recover it. 00:29:12.031 [2024-06-10 10:54:36.277420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.031 [2024-06-10 10:54:36.277426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.031 qpair failed and we were unable to recover it. 00:29:12.031 [2024-06-10 10:54:36.277734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.031 [2024-06-10 10:54:36.277741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.031 qpair failed and we were unable to recover it. 00:29:12.031 [2024-06-10 10:54:36.277913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.031 [2024-06-10 10:54:36.277920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.031 qpair failed and we were unable to recover it. 00:29:12.031 [2024-06-10 10:54:36.278299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.031 [2024-06-10 10:54:36.278305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.031 qpair failed and we were unable to recover it. 00:29:12.031 [2024-06-10 10:54:36.278665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.031 [2024-06-10 10:54:36.278671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.031 qpair failed and we were unable to recover it. 00:29:12.031 [2024-06-10 10:54:36.279026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.031 [2024-06-10 10:54:36.279033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.031 qpair failed and we were unable to recover it. 00:29:12.031 [2024-06-10 10:54:36.279316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.031 [2024-06-10 10:54:36.279323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.031 qpair failed and we were unable to recover it. 00:29:12.305 [2024-06-10 10:54:36.279662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.305 [2024-06-10 10:54:36.279670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.305 qpair failed and we were unable to recover it. 
00:29:12.305 [2024-06-10 10:54:36.279950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.305 [2024-06-10 10:54:36.279957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.305 qpair failed and we were unable to recover it. 00:29:12.305 [2024-06-10 10:54:36.280311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.305 [2024-06-10 10:54:36.280317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.305 qpair failed and we were unable to recover it. 00:29:12.305 [2024-06-10 10:54:36.280658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.305 [2024-06-10 10:54:36.280665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.305 qpair failed and we were unable to recover it. 00:29:12.305 [2024-06-10 10:54:36.281036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.305 [2024-06-10 10:54:36.281043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.306 qpair failed and we were unable to recover it. 00:29:12.306 [2024-06-10 10:54:36.281297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.306 [2024-06-10 10:54:36.281304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.306 qpair failed and we were unable to recover it. 00:29:12.306 [2024-06-10 10:54:36.281658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.306 [2024-06-10 10:54:36.281664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.306 qpair failed and we were unable to recover it. 00:29:12.306 [2024-06-10 10:54:36.282003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.306 [2024-06-10 10:54:36.282009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.306 qpair failed and we were unable to recover it. 00:29:12.306 [2024-06-10 10:54:36.282364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.306 [2024-06-10 10:54:36.282371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.306 qpair failed and we were unable to recover it. 00:29:12.306 [2024-06-10 10:54:36.282767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.306 [2024-06-10 10:54:36.282773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.306 qpair failed and we were unable to recover it. 00:29:12.306 [2024-06-10 10:54:36.283018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.306 [2024-06-10 10:54:36.283025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.306 qpair failed and we were unable to recover it. 
00:29:12.306 [2024-06-10 10:54:36.283363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.306 [2024-06-10 10:54:36.283370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.306 qpair failed and we were unable to recover it. 00:29:12.306 [2024-06-10 10:54:36.283703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.306 [2024-06-10 10:54:36.283709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.306 qpair failed and we were unable to recover it. 00:29:12.306 [2024-06-10 10:54:36.284037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.306 [2024-06-10 10:54:36.284044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.306 qpair failed and we were unable to recover it. 00:29:12.306 [2024-06-10 10:54:36.284404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.306 [2024-06-10 10:54:36.284410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.306 qpair failed and we were unable to recover it. 00:29:12.306 [2024-06-10 10:54:36.284795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.306 [2024-06-10 10:54:36.284801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.306 qpair failed and we were unable to recover it. 00:29:12.306 [2024-06-10 10:54:36.285194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.306 [2024-06-10 10:54:36.285201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.306 qpair failed and we were unable to recover it. 00:29:12.306 [2024-06-10 10:54:36.285555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.306 [2024-06-10 10:54:36.285562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.306 qpair failed and we were unable to recover it. 00:29:12.306 [2024-06-10 10:54:36.285901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.306 [2024-06-10 10:54:36.285907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.306 qpair failed and we were unable to recover it. 00:29:12.306 [2024-06-10 10:54:36.286271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.306 [2024-06-10 10:54:36.286278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.306 qpair failed and we were unable to recover it. 00:29:12.306 [2024-06-10 10:54:36.286634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.306 [2024-06-10 10:54:36.286640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.306 qpair failed and we were unable to recover it. 
00:29:12.306 [2024-06-10 10:54:36.286992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.306 [2024-06-10 10:54:36.286998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.306 qpair failed and we were unable to recover it. 00:29:12.306 [2024-06-10 10:54:36.287332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.306 [2024-06-10 10:54:36.287339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.306 qpair failed and we were unable to recover it. 00:29:12.306 [2024-06-10 10:54:36.287559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.306 [2024-06-10 10:54:36.287565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.306 qpair failed and we were unable to recover it. 00:29:12.306 [2024-06-10 10:54:36.287771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.306 [2024-06-10 10:54:36.287778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.306 qpair failed and we were unable to recover it. 00:29:12.306 [2024-06-10 10:54:36.288026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.306 [2024-06-10 10:54:36.288032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.306 qpair failed and we were unable to recover it. 00:29:12.306 [2024-06-10 10:54:36.288393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.306 [2024-06-10 10:54:36.288399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.306 qpair failed and we were unable to recover it. 00:29:12.306 [2024-06-10 10:54:36.288733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.306 [2024-06-10 10:54:36.288739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.306 qpair failed and we were unable to recover it. 00:29:12.306 [2024-06-10 10:54:36.288990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.306 [2024-06-10 10:54:36.288996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.306 qpair failed and we were unable to recover it. 00:29:12.306 [2024-06-10 10:54:36.289343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.306 [2024-06-10 10:54:36.289350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.306 qpair failed and we were unable to recover it. 00:29:12.306 [2024-06-10 10:54:36.289707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.306 [2024-06-10 10:54:36.289713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.306 qpair failed and we were unable to recover it. 
00:29:12.306 [2024-06-10 10:54:36.290053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.306 [2024-06-10 10:54:36.290060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.306 qpair failed and we were unable to recover it. 00:29:12.306 [2024-06-10 10:54:36.290410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.306 [2024-06-10 10:54:36.290417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.306 qpair failed and we were unable to recover it. 00:29:12.306 [2024-06-10 10:54:36.290660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.306 [2024-06-10 10:54:36.290667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.306 qpair failed and we were unable to recover it. 00:29:12.306 [2024-06-10 10:54:36.291031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.306 [2024-06-10 10:54:36.291038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.306 qpair failed and we were unable to recover it. 00:29:12.306 [2024-06-10 10:54:36.291301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.306 [2024-06-10 10:54:36.291307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.306 qpair failed and we were unable to recover it. 00:29:12.306 [2024-06-10 10:54:36.291641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.306 [2024-06-10 10:54:36.291647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.306 qpair failed and we were unable to recover it. 00:29:12.306 [2024-06-10 10:54:36.292001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.306 [2024-06-10 10:54:36.292008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.306 qpair failed and we were unable to recover it. 00:29:12.306 [2024-06-10 10:54:36.292365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.306 [2024-06-10 10:54:36.292371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.306 qpair failed and we were unable to recover it. 00:29:12.306 [2024-06-10 10:54:36.292698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.306 [2024-06-10 10:54:36.292704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.306 qpair failed and we were unable to recover it. 00:29:12.306 [2024-06-10 10:54:36.292919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.306 [2024-06-10 10:54:36.292926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.306 qpair failed and we were unable to recover it. 
00:29:12.306 [2024-06-10 10:54:36.293247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.306 [2024-06-10 10:54:36.293254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.306 qpair failed and we were unable to recover it. 00:29:12.306 [2024-06-10 10:54:36.293605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.307 [2024-06-10 10:54:36.293613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.307 qpair failed and we were unable to recover it. 00:29:12.307 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1023940 Killed "${NVMF_APP[@]}" "$@" 00:29:12.307 [2024-06-10 10:54:36.293973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.307 [2024-06-10 10:54:36.293980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.307 qpair failed and we were unable to recover it. 00:29:12.307 [2024-06-10 10:54:36.294314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.307 [2024-06-10 10:54:36.294321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.307 qpair failed and we were unable to recover it. 00:29:12.307 10:54:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:29:12.307 [2024-06-10 10:54:36.294694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.307 [2024-06-10 10:54:36.294700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.307 qpair failed and we were unable to recover it. 00:29:12.307 10:54:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:12.307 [2024-06-10 10:54:36.295038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.307 [2024-06-10 10:54:36.295044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.307 qpair failed and we were unable to recover it. 00:29:12.307 10:54:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:12.307 [2024-06-10 10:54:36.295280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.307 [2024-06-10 10:54:36.295287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.307 qpair failed and we were unable to recover it. 
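[editor's note] The trace above shows target_disconnect.sh killing the running NVMF_APP (line 36 of the script) and then calling disconnect_init / nvmfappstart to bring the target back up; while the listener is down, every connect attempt from the initiator side fails with errno = 111 (ECONNREFUSED), which is exactly the flood of posix_sock_create / nvme_tcp_qpair_connect_sock errors in this log. The following is a minimal standalone C sketch of that failure mode, not SPDK's posix_sock_create; the address 10.0.0.2 and port 4420 are taken from the log lines, everything else is illustrative.

/* Standalone sketch: a plain TCP connect() to an NVMe/TCP target address
 * while no listener is present reports ECONNREFUSED (errno 111), the same
 * error the log above repeats for each failed qpair connect attempt. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { 0 };
    int fd;

    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

    fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With the target process killed, this prints errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}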
00:29:12.307 10:54:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:12.307 10:54:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:12.307 [2024-06-10 10:54:36.295679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.307 [2024-06-10 10:54:36.295686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.307 qpair failed and we were unable to recover it. 00:29:12.307 [2024-06-10 10:54:36.296022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.307 [2024-06-10 10:54:36.296028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.307 qpair failed and we were unable to recover it. 00:29:12.307 [2024-06-10 10:54:36.296408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.307 [2024-06-10 10:54:36.296414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.307 qpair failed and we were unable to recover it. 00:29:12.307 [2024-06-10 10:54:36.296765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.307 [2024-06-10 10:54:36.296772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.307 qpair failed and we were unable to recover it. 00:29:12.307 [2024-06-10 10:54:36.297137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.307 [2024-06-10 10:54:36.297144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.307 qpair failed and we were unable to recover it. 00:29:12.307 [2024-06-10 10:54:36.297356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.307 [2024-06-10 10:54:36.297363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.307 qpair failed and we were unable to recover it. 00:29:12.307 [2024-06-10 10:54:36.297722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.307 [2024-06-10 10:54:36.297729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.307 qpair failed and we were unable to recover it. 00:29:12.307 [2024-06-10 10:54:36.298083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.307 [2024-06-10 10:54:36.298091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.307 qpair failed and we were unable to recover it. 00:29:12.307 [2024-06-10 10:54:36.298339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.307 [2024-06-10 10:54:36.298346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.307 qpair failed and we were unable to recover it. 
00:29:12.307 [2024-06-10 10:54:36.298730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.307 [2024-06-10 10:54:36.298737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.307 qpair failed and we were unable to recover it. 00:29:12.307 [2024-06-10 10:54:36.299147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.307 [2024-06-10 10:54:36.299154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.307 qpair failed and we were unable to recover it. 00:29:12.307 [2024-06-10 10:54:36.299493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.307 [2024-06-10 10:54:36.299499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.307 qpair failed and we were unable to recover it. 00:29:12.307 [2024-06-10 10:54:36.299837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.307 [2024-06-10 10:54:36.299844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.307 qpair failed and we were unable to recover it. 00:29:12.307 [2024-06-10 10:54:36.300112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.307 [2024-06-10 10:54:36.300119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.307 qpair failed and we were unable to recover it. 00:29:12.307 [2024-06-10 10:54:36.300279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.307 [2024-06-10 10:54:36.300286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.307 qpair failed and we were unable to recover it. 00:29:12.307 [2024-06-10 10:54:36.300476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.307 [2024-06-10 10:54:36.300483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.307 qpair failed and we were unable to recover it. 00:29:12.307 [2024-06-10 10:54:36.300805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.307 [2024-06-10 10:54:36.300812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.307 qpair failed and we were unable to recover it. 00:29:12.307 [2024-06-10 10:54:36.301165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.307 [2024-06-10 10:54:36.301171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.307 qpair failed and we were unable to recover it. 00:29:12.307 [2024-06-10 10:54:36.301521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.307 [2024-06-10 10:54:36.301528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.307 qpair failed and we were unable to recover it. 
00:29:12.307 [2024-06-10 10:54:36.301896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.307 [2024-06-10 10:54:36.301903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.307 qpair failed and we were unable to recover it. 00:29:12.307 [2024-06-10 10:54:36.302161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.307 [2024-06-10 10:54:36.302168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.307 qpair failed and we were unable to recover it. 00:29:12.307 [2024-06-10 10:54:36.302433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.307 [2024-06-10 10:54:36.302440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.307 qpair failed and we were unable to recover it. 00:29:12.307 [2024-06-10 10:54:36.302778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.307 [2024-06-10 10:54:36.302785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.307 qpair failed and we were unable to recover it. 00:29:12.307 [2024-06-10 10:54:36.302892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.307 [2024-06-10 10:54:36.302899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.307 qpair failed and we were unable to recover it. 00:29:12.307 10:54:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1024965 00:29:12.307 [2024-06-10 10:54:36.303158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.307 [2024-06-10 10:54:36.303166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.307 qpair failed and we were unable to recover it. 00:29:12.307 10:54:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1024965 00:29:12.307 [2024-06-10 10:54:36.303499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.307 [2024-06-10 10:54:36.303507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.307 qpair failed and we were unable to recover it. 00:29:12.307 10:54:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 1024965 ']' 00:29:12.307 10:54:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:12.307 10:54:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:12.307 [2024-06-10 10:54:36.303876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.307 [2024-06-10 10:54:36.303883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.307 qpair failed and we were unable to recover it. 
00:29:12.307 10:54:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:12.307 [2024-06-10 10:54:36.304248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.307 [2024-06-10 10:54:36.304256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.308 qpair failed and we were unable to recover it. 00:29:12.308 10:54:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:12.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:12.308 10:54:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:12.308 [2024-06-10 10:54:36.304660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.308 [2024-06-10 10:54:36.304667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.308 qpair failed and we were unable to recover it. 00:29:12.308 10:54:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:12.308 [2024-06-10 10:54:36.305013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.308 [2024-06-10 10:54:36.305021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.308 qpair failed and we were unable to recover it. 00:29:12.308 [2024-06-10 10:54:36.305351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.308 [2024-06-10 10:54:36.305359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.308 qpair failed and we were unable to recover it. 00:29:12.308 [2024-06-10 10:54:36.305588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.308 [2024-06-10 10:54:36.305596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.308 qpair failed and we were unable to recover it. 00:29:12.308 [2024-06-10 10:54:36.305952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.308 [2024-06-10 10:54:36.305960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.308 qpair failed and we were unable to recover it. 00:29:12.308 [2024-06-10 10:54:36.306346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.308 [2024-06-10 10:54:36.306353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.308 qpair failed and we were unable to recover it. 00:29:12.308 [2024-06-10 10:54:36.306795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.308 [2024-06-10 10:54:36.306803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.308 qpair failed and we were unable to recover it. 
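[editor's note] The trace above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then waits ("waitforlisten 1024965") for the new process to listen on the RPC socket /var/tmp/spdk.sock. The sketch below only illustrates the wait-for-listen idea as a polling connect loop on a UNIX-domain socket; it is not the autotest_common.sh helper, and the socket path and timeout are assumptions lifted from the log.

/* Hedged sketch: poll a UNIX-domain socket path until connect() succeeds,
 * i.e. until the freshly started target is up and listening, or time out. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_listen(const char *path, int timeout_s)
{
    struct sockaddr_un addr = { 0 };
    int i;

    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    for (i = 0; i < timeout_s; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;   /* target process is up and listening */
        }
        close(fd);      /* socket missing or refusing: not ready yet */
        sleep(1);
    }
    return -1;          /* timed out waiting for the listener */
}

int main(void)
{
    if (wait_for_listen("/var/tmp/spdk.sock", 30) == 0)   /* path from the log */
        printf("listener is ready\n");
    else
        printf("timed out waiting for listener\n");
    return 0;
}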
00:29:12.308 [2024-06-10 10:54:36.307164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.308 [2024-06-10 10:54:36.307172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.308 qpair failed and we were unable to recover it. 00:29:12.308 [2024-06-10 10:54:36.307415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.308 [2024-06-10 10:54:36.307422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.308 qpair failed and we were unable to recover it. 00:29:12.308 [2024-06-10 10:54:36.307802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.308 [2024-06-10 10:54:36.307811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.308 qpair failed and we were unable to recover it. 00:29:12.308 [2024-06-10 10:54:36.308175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.308 [2024-06-10 10:54:36.308183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.308 qpair failed and we were unable to recover it. 00:29:12.308 [2024-06-10 10:54:36.308534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.308 [2024-06-10 10:54:36.308541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.308 qpair failed and we were unable to recover it. 00:29:12.308 [2024-06-10 10:54:36.308901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.308 [2024-06-10 10:54:36.308908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.308 qpair failed and we were unable to recover it. 00:29:12.308 [2024-06-10 10:54:36.309310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.308 [2024-06-10 10:54:36.309317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.308 qpair failed and we were unable to recover it. 00:29:12.308 [2024-06-10 10:54:36.309715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.308 [2024-06-10 10:54:36.309723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.308 qpair failed and we were unable to recover it. 00:29:12.308 [2024-06-10 10:54:36.310078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.308 [2024-06-10 10:54:36.310085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.308 qpair failed and we were unable to recover it. 00:29:12.308 [2024-06-10 10:54:36.310307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.308 [2024-06-10 10:54:36.310315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.308 qpair failed and we were unable to recover it. 
00:29:12.308 [2024-06-10 10:54:36.310690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.308 [2024-06-10 10:54:36.310698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.308 qpair failed and we were unable to recover it. 00:29:12.308 [2024-06-10 10:54:36.311047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.308 [2024-06-10 10:54:36.311055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.308 qpair failed and we were unable to recover it. 00:29:12.308 [2024-06-10 10:54:36.311422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.308 [2024-06-10 10:54:36.311430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.308 qpair failed and we were unable to recover it. 00:29:12.308 [2024-06-10 10:54:36.311783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.308 [2024-06-10 10:54:36.311791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.308 qpair failed and we were unable to recover it. 00:29:12.308 [2024-06-10 10:54:36.312205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.308 [2024-06-10 10:54:36.312212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.308 qpair failed and we were unable to recover it. 00:29:12.308 [2024-06-10 10:54:36.312485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.308 [2024-06-10 10:54:36.312492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.308 qpair failed and we were unable to recover it. 00:29:12.308 [2024-06-10 10:54:36.312859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.308 [2024-06-10 10:54:36.312866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.308 qpair failed and we were unable to recover it. 00:29:12.308 [2024-06-10 10:54:36.313089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.308 [2024-06-10 10:54:36.313096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.308 qpair failed and we were unable to recover it. 00:29:12.308 [2024-06-10 10:54:36.313296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.308 [2024-06-10 10:54:36.313303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.308 qpair failed and we were unable to recover it. 00:29:12.308 [2024-06-10 10:54:36.313637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.308 [2024-06-10 10:54:36.313644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.308 qpair failed and we were unable to recover it. 
00:29:12.308 [2024-06-10 10:54:36.313908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.308 [2024-06-10 10:54:36.313915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.308 qpair failed and we were unable to recover it. 00:29:12.308 [2024-06-10 10:54:36.314172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.308 [2024-06-10 10:54:36.314181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.308 qpair failed and we were unable to recover it. 00:29:12.308 [2024-06-10 10:54:36.314261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.308 [2024-06-10 10:54:36.314268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.308 qpair failed and we were unable to recover it. 00:29:12.308 [2024-06-10 10:54:36.314530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.308 [2024-06-10 10:54:36.314537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.308 qpair failed and we were unable to recover it. 00:29:12.308 [2024-06-10 10:54:36.314891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.308 [2024-06-10 10:54:36.314899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.308 qpair failed and we were unable to recover it. 00:29:12.308 [2024-06-10 10:54:36.315230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.308 [2024-06-10 10:54:36.315236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.308 qpair failed and we were unable to recover it. 00:29:12.308 [2024-06-10 10:54:36.315500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.308 [2024-06-10 10:54:36.315507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.308 qpair failed and we were unable to recover it. 00:29:12.308 [2024-06-10 10:54:36.315888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.308 [2024-06-10 10:54:36.315894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.308 qpair failed and we were unable to recover it. 00:29:12.308 [2024-06-10 10:54:36.316248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.308 [2024-06-10 10:54:36.316255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.308 qpair failed and we were unable to recover it. 00:29:12.308 [2024-06-10 10:54:36.316445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-06-10 10:54:36.316452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 
00:29:12.309 [2024-06-10 10:54:36.316778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-06-10 10:54:36.316785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-06-10 10:54:36.317173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-06-10 10:54:36.317180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-06-10 10:54:36.317536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-06-10 10:54:36.317542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-06-10 10:54:36.317843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-06-10 10:54:36.317849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-06-10 10:54:36.318182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-06-10 10:54:36.318188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-06-10 10:54:36.318440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-06-10 10:54:36.318447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-06-10 10:54:36.318688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-06-10 10:54:36.318695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-06-10 10:54:36.318939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-06-10 10:54:36.318946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-06-10 10:54:36.319307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-06-10 10:54:36.319318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-06-10 10:54:36.319694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-06-10 10:54:36.319707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 
00:29:12.309 [2024-06-10 10:54:36.320062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-06-10 10:54:36.320069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-06-10 10:54:36.320436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-06-10 10:54:36.320443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-06-10 10:54:36.320801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-06-10 10:54:36.320808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-06-10 10:54:36.321188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-06-10 10:54:36.321195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-06-10 10:54:36.321542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-06-10 10:54:36.321549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-06-10 10:54:36.321917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-06-10 10:54:36.321923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-06-10 10:54:36.322167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-06-10 10:54:36.322173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-06-10 10:54:36.322516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-06-10 10:54:36.322523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-06-10 10:54:36.322784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-06-10 10:54:36.322791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-06-10 10:54:36.323002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-06-10 10:54:36.323008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 
00:29:12.309 [2024-06-10 10:54:36.323330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-06-10 10:54:36.323337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-06-10 10:54:36.323673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-06-10 10:54:36.323680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-06-10 10:54:36.323944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-06-10 10:54:36.323951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-06-10 10:54:36.324325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-06-10 10:54:36.324332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-06-10 10:54:36.324762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-06-10 10:54:36.324769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-06-10 10:54:36.325028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-06-10 10:54:36.325034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-06-10 10:54:36.325282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-06-10 10:54:36.325289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-06-10 10:54:36.325497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-06-10 10:54:36.325504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-06-10 10:54:36.325827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-06-10 10:54:36.325834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-06-10 10:54:36.326027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-06-10 10:54:36.326033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 
00:29:12.309 [2024-06-10 10:54:36.326409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-06-10 10:54:36.326415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-06-10 10:54:36.326539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-06-10 10:54:36.326547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-06-10 10:54:36.326776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-06-10 10:54:36.326782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-06-10 10:54:36.327025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-06-10 10:54:36.327032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-06-10 10:54:36.327381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-06-10 10:54:36.327388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-06-10 10:54:36.327769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-06-10 10:54:36.327775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-06-10 10:54:36.328164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.309 [2024-06-10 10:54:36.328170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.309 qpair failed and we were unable to recover it. 00:29:12.309 [2024-06-10 10:54:36.328304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-06-10 10:54:36.328310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-06-10 10:54:36.328671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-06-10 10:54:36.328677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-06-10 10:54:36.329048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-06-10 10:54:36.329054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 
00:29:12.310 [2024-06-10 10:54:36.329418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-06-10 10:54:36.329424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-06-10 10:54:36.329791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-06-10 10:54:36.329798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-06-10 10:54:36.330059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-06-10 10:54:36.330066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-06-10 10:54:36.330408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-06-10 10:54:36.330415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-06-10 10:54:36.330783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-06-10 10:54:36.330789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-06-10 10:54:36.331166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-06-10 10:54:36.331173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-06-10 10:54:36.331510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-06-10 10:54:36.331517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-06-10 10:54:36.331878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-06-10 10:54:36.331885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-06-10 10:54:36.332137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-06-10 10:54:36.332144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-06-10 10:54:36.332549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-06-10 10:54:36.332556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 
00:29:12.310 [2024-06-10 10:54:36.332960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-06-10 10:54:36.332968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-06-10 10:54:36.333220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-06-10 10:54:36.333227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-06-10 10:54:36.333578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-06-10 10:54:36.333587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-06-10 10:54:36.333915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-06-10 10:54:36.333922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-06-10 10:54:36.334134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-06-10 10:54:36.334142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-06-10 10:54:36.334510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-06-10 10:54:36.334518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-06-10 10:54:36.334903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-06-10 10:54:36.334910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-06-10 10:54:36.335271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-06-10 10:54:36.335279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-06-10 10:54:36.335410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-06-10 10:54:36.335418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-06-10 10:54:36.335645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-06-10 10:54:36.335652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 
00:29:12.310 [2024-06-10 10:54:36.336018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-06-10 10:54:36.336024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-06-10 10:54:36.336411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-06-10 10:54:36.336418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-06-10 10:54:36.336758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-06-10 10:54:36.336764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-06-10 10:54:36.337130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-06-10 10:54:36.337136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-06-10 10:54:36.337517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-06-10 10:54:36.337524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-06-10 10:54:36.337884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-06-10 10:54:36.337891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-06-10 10:54:36.338254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-06-10 10:54:36.338261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.310 [2024-06-10 10:54:36.338616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.310 [2024-06-10 10:54:36.338623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.310 qpair failed and we were unable to recover it. 00:29:12.311 [2024-06-10 10:54:36.338981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-06-10 10:54:36.338987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-06-10 10:54:36.339342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-06-10 10:54:36.339349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 
00:29:12.311 [2024-06-10 10:54:36.339607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-06-10 10:54:36.339614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-06-10 10:54:36.340002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-06-10 10:54:36.340011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-06-10 10:54:36.340409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-06-10 10:54:36.340416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-06-10 10:54:36.340770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-06-10 10:54:36.340776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-06-10 10:54:36.341114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-06-10 10:54:36.341120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-06-10 10:54:36.341461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-06-10 10:54:36.341468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-06-10 10:54:36.341837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-06-10 10:54:36.341844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-06-10 10:54:36.341947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-06-10 10:54:36.341954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-06-10 10:54:36.342295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-06-10 10:54:36.342302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-06-10 10:54:36.342724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-06-10 10:54:36.342731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 
00:29:12.311 [2024-06-10 10:54:36.343189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-06-10 10:54:36.343196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-06-10 10:54:36.343385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-06-10 10:54:36.343392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-06-10 10:54:36.343734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-06-10 10:54:36.343740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-06-10 10:54:36.343894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-06-10 10:54:36.343901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-06-10 10:54:36.344233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-06-10 10:54:36.344240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-06-10 10:54:36.344628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-06-10 10:54:36.344634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-06-10 10:54:36.344989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-06-10 10:54:36.344996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-06-10 10:54:36.345216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-06-10 10:54:36.345223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-06-10 10:54:36.345577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-06-10 10:54:36.345585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-06-10 10:54:36.345953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-06-10 10:54:36.345960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 
00:29:12.311 [2024-06-10 10:54:36.346183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-06-10 10:54:36.346190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-06-10 10:54:36.346551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-06-10 10:54:36.346558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-06-10 10:54:36.346957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-06-10 10:54:36.346964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-06-10 10:54:36.347209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-06-10 10:54:36.347216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-06-10 10:54:36.347526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-06-10 10:54:36.347534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-06-10 10:54:36.347749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-06-10 10:54:36.347756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-06-10 10:54:36.348121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-06-10 10:54:36.348128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-06-10 10:54:36.348272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-06-10 10:54:36.348280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-06-10 10:54:36.348613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-06-10 10:54:36.348620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-06-10 10:54:36.349008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-06-10 10:54:36.349014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 
00:29:12.311 [2024-06-10 10:54:36.349366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-06-10 10:54:36.349374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-06-10 10:54:36.349747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-06-10 10:54:36.349754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-06-10 10:54:36.350192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-06-10 10:54:36.350199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-06-10 10:54:36.350531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-06-10 10:54:36.350539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-06-10 10:54:36.350799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-06-10 10:54:36.350805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-06-10 10:54:36.351033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.311 [2024-06-10 10:54:36.351039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.311 qpair failed and we were unable to recover it. 00:29:12.311 [2024-06-10 10:54:36.351442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-06-10 10:54:36.351448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-06-10 10:54:36.351803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-06-10 10:54:36.351817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-06-10 10:54:36.352174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-06-10 10:54:36.352181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-06-10 10:54:36.352512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-06-10 10:54:36.352519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 
00:29:12.312 [2024-06-10 10:54:36.352876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.312 [2024-06-10 10:54:36.352882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420
00:29:12.312 qpair failed and we were unable to recover it.
00:29:12.312 [2024-06-10 10:54:36.353223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.312 [2024-06-10 10:54:36.353230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420
00:29:12.312 qpair failed and we were unable to recover it.
00:29:12.312 [2024-06-10 10:54:36.353600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.312 [2024-06-10 10:54:36.353608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420
00:29:12.312 qpair failed and we were unable to recover it.
00:29:12.312 [2024-06-10 10:54:36.353835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.312 [2024-06-10 10:54:36.353842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420
00:29:12.312 qpair failed and we were unable to recover it.
00:29:12.312 [2024-06-10 10:54:36.354269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.312 [2024-06-10 10:54:36.354276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420
00:29:12.312 qpair failed and we were unable to recover it.
00:29:12.312 [2024-06-10 10:54:36.354540] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization...
00:29:12.312 [2024-06-10 10:54:36.354583] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:12.312 [2024-06-10 10:54:36.354636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.312 [2024-06-10 10:54:36.354643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420
00:29:12.312 qpair failed and we were unable to recover it.
00:29:12.312 [2024-06-10 10:54:36.354998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.312 [2024-06-10 10:54:36.355004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420
00:29:12.312 qpair failed and we were unable to recover it.
00:29:12.312 [2024-06-10 10:54:36.355233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.312 [2024-06-10 10:54:36.355240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420
00:29:12.312 qpair failed and we were unable to recover it.
00:29:12.312 [2024-06-10 10:54:36.355441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.312 [2024-06-10 10:54:36.355449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420
00:29:12.312 qpair failed and we were unable to recover it.
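Interleaved with the connection errors above, the target-side nvmf application is starting up: the "Starting SPDK v24.09-pre ... / DPDK 24.03.0 initialization..." record and the bracketed "DPDK EAL parameters" list are the arguments the SPDK app hands to DPDK's Environment Abstraction Layer. As a rough illustration only (a bare DPDK program, not the SPDK app framework, with the log's flag values reused as assumptions), this is the shape of that hand-off via rte_eal_init():

/* Illustrative sketch: pass EAL flags like the ones logged above to
 * rte_eal_init(). "nvmf" is simply the argv[0] name taken from the log. */
#include <stdio.h>
#include <rte_eal.h>

int main(void)
{
    char *eal_argv[] = {
        "nvmf",
        "-c", "0xF0",                     /* core mask, as in the log */
        "--no-telemetry",
        "--base-virtaddr=0x200000000000",
        "--match-allocations",
        "--file-prefix=spdk0",
        "--proc-type=auto",
    };
    int eal_argc = (int)(sizeof(eal_argv) / sizeof(eal_argv[0]));

    if (rte_eal_init(eal_argc, eal_argv) < 0) {
        printf("EAL initialization failed\n");
        return 1;
    }

    rte_eal_cleanup();
    return 0;
}

The log-level flags are omitted here for brevity; the point is only that the bracketed parameter list in the log is an EAL argv, not kernel or test-script options.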
00:29:12.312 [2024-06-10 10:54:36.355689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-06-10 10:54:36.355696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-06-10 10:54:36.356030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-06-10 10:54:36.356038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-06-10 10:54:36.356415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-06-10 10:54:36.356422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-06-10 10:54:36.356682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-06-10 10:54:36.356689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-06-10 10:54:36.357077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-06-10 10:54:36.357084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-06-10 10:54:36.357369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-06-10 10:54:36.357376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-06-10 10:54:36.357763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-06-10 10:54:36.357770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-06-10 10:54:36.358134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-06-10 10:54:36.358141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-06-10 10:54:36.358513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-06-10 10:54:36.358520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-06-10 10:54:36.358752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-06-10 10:54:36.358759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 
00:29:12.312 [2024-06-10 10:54:36.359062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-06-10 10:54:36.359069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-06-10 10:54:36.359422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-06-10 10:54:36.359430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-06-10 10:54:36.359680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-06-10 10:54:36.359688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-06-10 10:54:36.359998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-06-10 10:54:36.360005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-06-10 10:54:36.360358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-06-10 10:54:36.360366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-06-10 10:54:36.360724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-06-10 10:54:36.360730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-06-10 10:54:36.361113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-06-10 10:54:36.361120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-06-10 10:54:36.361369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-06-10 10:54:36.361377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-06-10 10:54:36.361744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-06-10 10:54:36.361751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-06-10 10:54:36.362111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-06-10 10:54:36.362118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 
00:29:12.312 [2024-06-10 10:54:36.362432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-06-10 10:54:36.362439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-06-10 10:54:36.362798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-06-10 10:54:36.362805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-06-10 10:54:36.363062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-06-10 10:54:36.363068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-06-10 10:54:36.363448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.312 [2024-06-10 10:54:36.363455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.312 qpair failed and we were unable to recover it. 00:29:12.312 [2024-06-10 10:54:36.363845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-06-10 10:54:36.363852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-06-10 10:54:36.364212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-06-10 10:54:36.364219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-06-10 10:54:36.364519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-06-10 10:54:36.364526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-06-10 10:54:36.364884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-06-10 10:54:36.364891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-06-10 10:54:36.365274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-06-10 10:54:36.365282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-06-10 10:54:36.365651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-06-10 10:54:36.365659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 
00:29:12.313 [2024-06-10 10:54:36.365876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-06-10 10:54:36.365883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-06-10 10:54:36.366068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-06-10 10:54:36.366078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-06-10 10:54:36.366288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-06-10 10:54:36.366296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-06-10 10:54:36.366663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-06-10 10:54:36.366670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-06-10 10:54:36.367029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-06-10 10:54:36.367036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-06-10 10:54:36.367223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-06-10 10:54:36.367230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-06-10 10:54:36.367587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-06-10 10:54:36.367595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-06-10 10:54:36.367964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-06-10 10:54:36.367971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-06-10 10:54:36.368329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-06-10 10:54:36.368336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-06-10 10:54:36.368692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-06-10 10:54:36.368699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 
00:29:12.313 [2024-06-10 10:54:36.369079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-06-10 10:54:36.369087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-06-10 10:54:36.369353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-06-10 10:54:36.369360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-06-10 10:54:36.369717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-06-10 10:54:36.369724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-06-10 10:54:36.370085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-06-10 10:54:36.370092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-06-10 10:54:36.370324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-06-10 10:54:36.370331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-06-10 10:54:36.370646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-06-10 10:54:36.370654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-06-10 10:54:36.371020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-06-10 10:54:36.371027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-06-10 10:54:36.371386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-06-10 10:54:36.371393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-06-10 10:54:36.371604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-06-10 10:54:36.371611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-06-10 10:54:36.371814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-06-10 10:54:36.371821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 
00:29:12.313 [2024-06-10 10:54:36.372134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-06-10 10:54:36.372141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-06-10 10:54:36.372523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-06-10 10:54:36.372530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-06-10 10:54:36.372804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-06-10 10:54:36.372811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-06-10 10:54:36.373171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-06-10 10:54:36.373178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-06-10 10:54:36.373523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-06-10 10:54:36.373531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-06-10 10:54:36.373769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-06-10 10:54:36.373775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-06-10 10:54:36.374141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-06-10 10:54:36.374148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-06-10 10:54:36.374531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-06-10 10:54:36.374538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-06-10 10:54:36.374913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-06-10 10:54:36.374920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-06-10 10:54:36.375246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-06-10 10:54:36.375253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 
00:29:12.313 [2024-06-10 10:54:36.375608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-06-10 10:54:36.375616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-06-10 10:54:36.375950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-06-10 10:54:36.375956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.313 [2024-06-10 10:54:36.376281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.313 [2024-06-10 10:54:36.376287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.313 qpair failed and we were unable to recover it. 00:29:12.314 [2024-06-10 10:54:36.376662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-06-10 10:54:36.376669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-06-10 10:54:36.376883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-06-10 10:54:36.376890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-06-10 10:54:36.377251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-06-10 10:54:36.377258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-06-10 10:54:36.377485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-06-10 10:54:36.377491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-06-10 10:54:36.377870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-06-10 10:54:36.377876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-06-10 10:54:36.378091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-06-10 10:54:36.378098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-06-10 10:54:36.378441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-06-10 10:54:36.378448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 
00:29:12.314 [2024-06-10 10:54:36.378788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-06-10 10:54:36.378794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-06-10 10:54:36.379134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-06-10 10:54:36.379142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-06-10 10:54:36.379506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-06-10 10:54:36.379513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-06-10 10:54:36.379871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-06-10 10:54:36.379885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-06-10 10:54:36.380251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-06-10 10:54:36.380257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-06-10 10:54:36.380636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-06-10 10:54:36.380643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-06-10 10:54:36.381024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-06-10 10:54:36.381031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-06-10 10:54:36.381367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-06-10 10:54:36.381374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-06-10 10:54:36.381686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-06-10 10:54:36.381692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-06-10 10:54:36.382030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-06-10 10:54:36.382036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 
00:29:12.314 [2024-06-10 10:54:36.382392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-06-10 10:54:36.382399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-06-10 10:54:36.382760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-06-10 10:54:36.382767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-06-10 10:54:36.383146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-06-10 10:54:36.383153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-06-10 10:54:36.383530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-06-10 10:54:36.383537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-06-10 10:54:36.383875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-06-10 10:54:36.383882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-06-10 10:54:36.384104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-06-10 10:54:36.384111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-06-10 10:54:36.384469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-06-10 10:54:36.384476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-06-10 10:54:36.384724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-06-10 10:54:36.384731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-06-10 10:54:36.385091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-06-10 10:54:36.385098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 00:29:12.314 [2024-06-10 10:54:36.385348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.314 [2024-06-10 10:54:36.385354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.314 qpair failed and we were unable to recover it. 
00:29:12.314 [2024-06-10 10:54:36.385406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.314 [2024-06-10 10:54:36.385413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420
00:29:12.314 qpair failed and we were unable to recover it.
00:29:12.314 [2024-06-10 10:54:36.385748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.314 [2024-06-10 10:54:36.385755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420
00:29:12.314 qpair failed and we were unable to recover it.
00:29:12.314 [2024-06-10 10:54:36.386094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.314 [2024-06-10 10:54:36.386101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420
00:29:12.314 qpair failed and we were unable to recover it.
00:29:12.314 [2024-06-10 10:54:36.386442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.314 [2024-06-10 10:54:36.386449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420
00:29:12.314 qpair failed and we were unable to recover it.
00:29:12.314 EAL: No free 2048 kB hugepages reported on node 1
00:29:12.314 [2024-06-10 10:54:36.386711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.314 [2024-06-10 10:54:36.386717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420
00:29:12.314 qpair failed and we were unable to recover it.
00:29:12.314 [2024-06-10 10:54:36.387077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.314 [2024-06-10 10:54:36.387084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420
00:29:12.314 qpair failed and we were unable to recover it.
00:29:12.314 [2024-06-10 10:54:36.387338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.314 [2024-06-10 10:54:36.387345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420
00:29:12.314 qpair failed and we were unable to recover it.
00:29:12.314 [2024-06-10 10:54:36.387736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.314 [2024-06-10 10:54:36.387742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420
00:29:12.315 qpair failed and we were unable to recover it.
00:29:12.315 [2024-06-10 10:54:36.388122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.315 [2024-06-10 10:54:36.388129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420
00:29:12.315 qpair failed and we were unable to recover it.
00:29:12.315 [2024-06-10 10:54:36.388379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.315 [2024-06-10 10:54:36.388385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420
00:29:12.315 qpair failed and we were unable to recover it.
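One record in this stretch is not a socket error: "EAL: No free 2048 kB hugepages reported on node 1" means DPDK's EAL found zero free 2 MB hugepages on NUMA node 1 while the nvmf target was initializing (it may still satisfy allocations from node 0 or from other page sizes, depending on the host configuration). A small sketch, assuming the standard Linux sysfs layout, for checking the per-node counters this message refers to:

/* Sketch: read the 2048 kB hugepage counters for NUMA node 1 from sysfs.
 * Paths are the standard Linux locations; node 1 matches the EAL message. */
#include <stdio.h>

static long read_counter(const char *path)
{
    long v = -1;
    FILE *f = fopen(path, "r");
    if (f) {
        if (fscanf(f, "%ld", &v) != 1)
            v = -1;
        fclose(f);
    }
    return v;
}

int main(void)
{
    const char *base =
        "/sys/devices/system/node/node1/hugepages/hugepages-2048kB";
    char path[256];
    long total, free_pages;

    snprintf(path, sizeof(path), "%s/nr_hugepages", base);
    total = read_counter(path);
    snprintf(path, sizeof(path), "%s/free_hugepages", base);
    free_pages = read_counter(path);

    printf("node1 2048kB hugepages: total=%ld free=%ld\n", total, free_pages);
    return 0;
}

If free comes back as 0 while the test expects hugepage-backed memory on that node, that would line up with the EAL warning above.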
00:29:12.315 [2024-06-10 10:54:36.388608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-06-10 10:54:36.388615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-06-10 10:54:36.388977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-06-10 10:54:36.388984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-06-10 10:54:36.389310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-06-10 10:54:36.389317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-06-10 10:54:36.389537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-06-10 10:54:36.389544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-06-10 10:54:36.389983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-06-10 10:54:36.389990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-06-10 10:54:36.390388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-06-10 10:54:36.390395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-06-10 10:54:36.390727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-06-10 10:54:36.390734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-06-10 10:54:36.391093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-06-10 10:54:36.391100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-06-10 10:54:36.391447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-06-10 10:54:36.391454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-06-10 10:54:36.391804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-06-10 10:54:36.391811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 
00:29:12.315 [2024-06-10 10:54:36.392168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-06-10 10:54:36.392175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-06-10 10:54:36.392412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-06-10 10:54:36.392422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-06-10 10:54:36.392664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-06-10 10:54:36.392671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-06-10 10:54:36.393022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-06-10 10:54:36.393029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-06-10 10:54:36.393408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-06-10 10:54:36.393415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-06-10 10:54:36.393596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-06-10 10:54:36.393603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-06-10 10:54:36.393860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-06-10 10:54:36.393866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-06-10 10:54:36.394096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-06-10 10:54:36.394102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-06-10 10:54:36.394449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-06-10 10:54:36.394456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-06-10 10:54:36.394814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-06-10 10:54:36.394821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 
00:29:12.315 [2024-06-10 10:54:36.395159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-06-10 10:54:36.395165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-06-10 10:54:36.395289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-06-10 10:54:36.395296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-06-10 10:54:36.395554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-06-10 10:54:36.395561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-06-10 10:54:36.395940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-06-10 10:54:36.395947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-06-10 10:54:36.396281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-06-10 10:54:36.396289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-06-10 10:54:36.396594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-06-10 10:54:36.396601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-06-10 10:54:36.396937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-06-10 10:54:36.396944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-06-10 10:54:36.397222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-06-10 10:54:36.397229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-06-10 10:54:36.397577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-06-10 10:54:36.397584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-06-10 10:54:36.397957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-06-10 10:54:36.397963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 
00:29:12.315 [2024-06-10 10:54:36.398222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-06-10 10:54:36.398229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-06-10 10:54:36.398552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-06-10 10:54:36.398559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-06-10 10:54:36.398907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-06-10 10:54:36.398914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.315 qpair failed and we were unable to recover it. 00:29:12.315 [2024-06-10 10:54:36.399301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.315 [2024-06-10 10:54:36.399307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-06-10 10:54:36.399440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-06-10 10:54:36.399447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-06-10 10:54:36.399782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-06-10 10:54:36.399788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-06-10 10:54:36.399976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-06-10 10:54:36.399983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-06-10 10:54:36.400324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-06-10 10:54:36.400334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-06-10 10:54:36.400702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-06-10 10:54:36.400710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-06-10 10:54:36.400963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-06-10 10:54:36.400970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 
00:29:12.316 [2024-06-10 10:54:36.401330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-06-10 10:54:36.401337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-06-10 10:54:36.401519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-06-10 10:54:36.401526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-06-10 10:54:36.401913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-06-10 10:54:36.401919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-06-10 10:54:36.402274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-06-10 10:54:36.402281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-06-10 10:54:36.402641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-06-10 10:54:36.402647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-06-10 10:54:36.403001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-06-10 10:54:36.403009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-06-10 10:54:36.403360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-06-10 10:54:36.403368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-06-10 10:54:36.403673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-06-10 10:54:36.403680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-06-10 10:54:36.404066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-06-10 10:54:36.404073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-06-10 10:54:36.404421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-06-10 10:54:36.404429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 
00:29:12.316 [2024-06-10 10:54:36.404794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-06-10 10:54:36.404802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-06-10 10:54:36.405184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-06-10 10:54:36.405192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-06-10 10:54:36.405622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-06-10 10:54:36.405629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-06-10 10:54:36.405967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-06-10 10:54:36.405974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-06-10 10:54:36.406353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-06-10 10:54:36.406361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-06-10 10:54:36.406746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-06-10 10:54:36.406753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-06-10 10:54:36.406969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-06-10 10:54:36.406976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-06-10 10:54:36.407367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-06-10 10:54:36.407374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-06-10 10:54:36.407733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-06-10 10:54:36.407741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-06-10 10:54:36.408123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-06-10 10:54:36.408130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 
00:29:12.316 [2024-06-10 10:54:36.408400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-06-10 10:54:36.408408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-06-10 10:54:36.408754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-06-10 10:54:36.408762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-06-10 10:54:36.409097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-06-10 10:54:36.409105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-06-10 10:54:36.409365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-06-10 10:54:36.409372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-06-10 10:54:36.409669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-06-10 10:54:36.409676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-06-10 10:54:36.410030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-06-10 10:54:36.410038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-06-10 10:54:36.410415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-06-10 10:54:36.410423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-06-10 10:54:36.410764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-06-10 10:54:36.410771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-06-10 10:54:36.411168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-06-10 10:54:36.411176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-06-10 10:54:36.411412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-06-10 10:54:36.411419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 
00:29:12.316 [2024-06-10 10:54:36.411777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-06-10 10:54:36.411784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.316 [2024-06-10 10:54:36.412156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.316 [2024-06-10 10:54:36.412163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.316 qpair failed and we were unable to recover it. 00:29:12.317 [2024-06-10 10:54:36.412514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-06-10 10:54:36.412521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-06-10 10:54:36.412929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-06-10 10:54:36.412936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-06-10 10:54:36.413271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-06-10 10:54:36.413279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-06-10 10:54:36.413622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-06-10 10:54:36.413630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-06-10 10:54:36.413988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-06-10 10:54:36.413995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-06-10 10:54:36.414333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-06-10 10:54:36.414340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-06-10 10:54:36.414548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-06-10 10:54:36.414555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-06-10 10:54:36.414705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-06-10 10:54:36.414712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 
00:29:12.317 [2024-06-10 10:54:36.415039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-06-10 10:54:36.415047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-06-10 10:54:36.415408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-06-10 10:54:36.415416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-06-10 10:54:36.415780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-06-10 10:54:36.415787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-06-10 10:54:36.416146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-06-10 10:54:36.416153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-06-10 10:54:36.416365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-06-10 10:54:36.416373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-06-10 10:54:36.416708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-06-10 10:54:36.416714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-06-10 10:54:36.417019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-06-10 10:54:36.417026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-06-10 10:54:36.417387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-06-10 10:54:36.417394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-06-10 10:54:36.417754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-06-10 10:54:36.417762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-06-10 10:54:36.418124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-06-10 10:54:36.418131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 
00:29:12.317 [2024-06-10 10:54:36.418543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-06-10 10:54:36.418550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-06-10 10:54:36.418940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-06-10 10:54:36.418949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-06-10 10:54:36.419190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-06-10 10:54:36.419196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-06-10 10:54:36.419567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-06-10 10:54:36.419574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-06-10 10:54:36.419828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-06-10 10:54:36.419836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-06-10 10:54:36.420193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-06-10 10:54:36.420200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-06-10 10:54:36.420539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-06-10 10:54:36.420546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-06-10 10:54:36.420906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-06-10 10:54:36.420913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-06-10 10:54:36.421293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-06-10 10:54:36.421300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-06-10 10:54:36.421709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-06-10 10:54:36.421716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 
00:29:12.317 [2024-06-10 10:54:36.422006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-06-10 10:54:36.422013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-06-10 10:54:36.422362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-06-10 10:54:36.422369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-06-10 10:54:36.422620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-06-10 10:54:36.422627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-06-10 10:54:36.422982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-06-10 10:54:36.422988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-06-10 10:54:36.423322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-06-10 10:54:36.423330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-06-10 10:54:36.423595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-06-10 10:54:36.423603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-06-10 10:54:36.423863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-06-10 10:54:36.423869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-06-10 10:54:36.424127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-06-10 10:54:36.424134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-06-10 10:54:36.424392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-06-10 10:54:36.424398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 00:29:12.317 [2024-06-10 10:54:36.424740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.317 [2024-06-10 10:54:36.424747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.317 qpair failed and we were unable to recover it. 
00:29:12.317 [2024-06-10 10:54:36.425123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-06-10 10:54:36.425129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.318 qpair failed and we were unable to recover it. 00:29:12.318 [2024-06-10 10:54:36.425502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-06-10 10:54:36.425509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.318 qpair failed and we were unable to recover it. 00:29:12.318 [2024-06-10 10:54:36.425837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-06-10 10:54:36.425843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.318 qpair failed and we were unable to recover it. 00:29:12.318 [2024-06-10 10:54:36.426196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-06-10 10:54:36.426203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.318 qpair failed and we were unable to recover it. 00:29:12.318 [2024-06-10 10:54:36.426453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-06-10 10:54:36.426460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.318 qpair failed and we were unable to recover it. 00:29:12.318 [2024-06-10 10:54:36.426685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-06-10 10:54:36.426692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.318 qpair failed and we were unable to recover it. 00:29:12.318 [2024-06-10 10:54:36.426947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-06-10 10:54:36.426955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.318 qpair failed and we were unable to recover it. 00:29:12.318 [2024-06-10 10:54:36.427311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-06-10 10:54:36.427319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.318 qpair failed and we were unable to recover it. 00:29:12.318 [2024-06-10 10:54:36.427658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-06-10 10:54:36.427666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.318 qpair failed and we were unable to recover it. 00:29:12.318 [2024-06-10 10:54:36.427990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-06-10 10:54:36.427998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.318 qpair failed and we were unable to recover it. 
00:29:12.318 [2024-06-10 10:54:36.428353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-06-10 10:54:36.428360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.318 qpair failed and we were unable to recover it. 00:29:12.318 [2024-06-10 10:54:36.428699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-06-10 10:54:36.428706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.318 qpair failed and we were unable to recover it. 00:29:12.318 [2024-06-10 10:54:36.429071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-06-10 10:54:36.429078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.318 qpair failed and we were unable to recover it. 00:29:12.318 [2024-06-10 10:54:36.429441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-06-10 10:54:36.429448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.318 qpair failed and we were unable to recover it. 00:29:12.318 [2024-06-10 10:54:36.429832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-06-10 10:54:36.429838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.318 qpair failed and we were unable to recover it. 00:29:12.318 [2024-06-10 10:54:36.430206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-06-10 10:54:36.430213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.318 qpair failed and we were unable to recover it. 00:29:12.318 [2024-06-10 10:54:36.430589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-06-10 10:54:36.430596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.318 qpair failed and we were unable to recover it. 00:29:12.318 [2024-06-10 10:54:36.430931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-06-10 10:54:36.430938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.318 qpair failed and we were unable to recover it. 00:29:12.318 [2024-06-10 10:54:36.431298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-06-10 10:54:36.431305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.318 qpair failed and we were unable to recover it. 00:29:12.318 [2024-06-10 10:54:36.431533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-06-10 10:54:36.431539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.318 qpair failed and we were unable to recover it. 
00:29:12.318 [2024-06-10 10:54:36.431880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-06-10 10:54:36.431887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.318 qpair failed and we were unable to recover it. 00:29:12.318 [2024-06-10 10:54:36.432120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-06-10 10:54:36.432128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.318 qpair failed and we were unable to recover it. 00:29:12.318 [2024-06-10 10:54:36.432177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-06-10 10:54:36.432184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.318 qpair failed and we were unable to recover it. 00:29:12.318 [2024-06-10 10:54:36.432493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-06-10 10:54:36.432500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.318 qpair failed and we were unable to recover it. 00:29:12.318 [2024-06-10 10:54:36.432861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-06-10 10:54:36.432868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.318 qpair failed and we were unable to recover it. 00:29:12.318 [2024-06-10 10:54:36.433176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-06-10 10:54:36.433183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.318 qpair failed and we were unable to recover it. 00:29:12.318 [2024-06-10 10:54:36.433362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-06-10 10:54:36.433370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.318 qpair failed and we were unable to recover it. 00:29:12.318 [2024-06-10 10:54:36.433724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-06-10 10:54:36.433731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.318 qpair failed and we were unable to recover it. 00:29:12.318 [2024-06-10 10:54:36.434072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-06-10 10:54:36.434079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.318 qpair failed and we were unable to recover it. 00:29:12.318 [2024-06-10 10:54:36.434476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-06-10 10:54:36.434484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.318 qpair failed and we were unable to recover it. 
00:29:12.318 [2024-06-10 10:54:36.434840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-06-10 10:54:36.434847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.318 qpair failed and we were unable to recover it. 00:29:12.318 [2024-06-10 10:54:36.435187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-06-10 10:54:36.435194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.318 qpair failed and we were unable to recover it. 00:29:12.318 [2024-06-10 10:54:36.435521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-06-10 10:54:36.435528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.318 qpair failed and we were unable to recover it. 00:29:12.318 [2024-06-10 10:54:36.435842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-06-10 10:54:36.435848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.318 qpair failed and we were unable to recover it. 00:29:12.318 [2024-06-10 10:54:36.436227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-06-10 10:54:36.436233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.318 qpair failed and we were unable to recover it. 00:29:12.318 [2024-06-10 10:54:36.436570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-06-10 10:54:36.436578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.318 qpair failed and we were unable to recover it. 00:29:12.318 [2024-06-10 10:54:36.436934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-06-10 10:54:36.436941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.318 qpair failed and we were unable to recover it. 00:29:12.318 [2024-06-10 10:54:36.437198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-06-10 10:54:36.437204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.318 qpair failed and we were unable to recover it. 00:29:12.318 [2024-06-10 10:54:36.437546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.318 [2024-06-10 10:54:36.437554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.319 qpair failed and we were unable to recover it. 00:29:12.319 [2024-06-10 10:54:36.437914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-06-10 10:54:36.437922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.319 qpair failed and we were unable to recover it. 
00:29:12.319 [2024-06-10 10:54:36.438274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-06-10 10:54:36.438281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.319 qpair failed and we were unable to recover it. 00:29:12.319 [2024-06-10 10:54:36.438638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-06-10 10:54:36.438645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.319 qpair failed and we were unable to recover it. 00:29:12.319 [2024-06-10 10:54:36.439003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-06-10 10:54:36.439010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.319 qpair failed and we were unable to recover it. 00:29:12.319 [2024-06-10 10:54:36.439342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-06-10 10:54:36.439350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.319 qpair failed and we were unable to recover it. 00:29:12.319 [2024-06-10 10:54:36.439420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:12.319 [2024-06-10 10:54:36.439715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-06-10 10:54:36.439722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.319 qpair failed and we were unable to recover it. 00:29:12.319 [2024-06-10 10:54:36.440059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-06-10 10:54:36.440066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.319 qpair failed and we were unable to recover it. 00:29:12.319 [2024-06-10 10:54:36.440318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-06-10 10:54:36.440325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.319 qpair failed and we were unable to recover it. 00:29:12.319 [2024-06-10 10:54:36.440520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-06-10 10:54:36.440527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.319 qpair failed and we were unable to recover it. 00:29:12.319 [2024-06-10 10:54:36.440862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-06-10 10:54:36.440869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.319 qpair failed and we were unable to recover it. 
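The NOTICE interleaved above (app.c:909 spdk_app_start: Total cores available: 4) suggests an SPDK application is still starting up on 4 cores while the host side keeps retrying: every connect() to 10.0.0.2:4420 is refused with errno 111 (ECONNREFUSED) because nothing is accepting on that listener yet. A minimal readiness-probe sketch for that listener, assuming only bash and coreutils; the address and port are copied from the log, the polling interval is an assumption, and this is not part of the test scripts themselves:

    # Poll until something accepts TCP connections on the NVMe/TCP listener port.
    # '/dev/tcp' redirection is a bash feature; 'timeout' comes from coreutils.
    until timeout 1 bash -c '>/dev/tcp/10.0.0.2/4420' 2>/dev/null; do
        echo 'target not listening yet (connect() refused, errno 111) - retrying'
        sleep 0.2
    done
    echo 'listener on 10.0.0.2:4420 is up'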
00:29:12.319 [2024-06-10 10:54:36.441234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-06-10 10:54:36.441249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.319 qpair failed and we were unable to recover it. 00:29:12.319 [2024-06-10 10:54:36.441599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-06-10 10:54:36.441606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.319 qpair failed and we were unable to recover it. 00:29:12.319 [2024-06-10 10:54:36.441960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-06-10 10:54:36.441966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.319 qpair failed and we were unable to recover it. 00:29:12.319 [2024-06-10 10:54:36.442276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-06-10 10:54:36.442283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.319 qpair failed and we were unable to recover it. 00:29:12.319 [2024-06-10 10:54:36.442643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-06-10 10:54:36.442650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.319 qpair failed and we were unable to recover it. 00:29:12.319 [2024-06-10 10:54:36.443003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-06-10 10:54:36.443010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.319 qpair failed and we were unable to recover it. 00:29:12.319 [2024-06-10 10:54:36.443370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-06-10 10:54:36.443377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.319 qpair failed and we were unable to recover it. 00:29:12.319 [2024-06-10 10:54:36.443750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-06-10 10:54:36.443756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.319 qpair failed and we were unable to recover it. 00:29:12.319 [2024-06-10 10:54:36.444096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-06-10 10:54:36.444103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.319 qpair failed and we were unable to recover it. 00:29:12.319 [2024-06-10 10:54:36.444466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-06-10 10:54:36.444475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.319 qpair failed and we were unable to recover it. 
00:29:12.319 [2024-06-10 10:54:36.444839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-06-10 10:54:36.444845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.319 qpair failed and we were unable to recover it. 00:29:12.319 [2024-06-10 10:54:36.445266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-06-10 10:54:36.445273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.319 qpair failed and we were unable to recover it. 00:29:12.319 [2024-06-10 10:54:36.445620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-06-10 10:54:36.445627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.319 qpair failed and we were unable to recover it. 00:29:12.319 [2024-06-10 10:54:36.445988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-06-10 10:54:36.445996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.319 qpair failed and we were unable to recover it. 00:29:12.319 [2024-06-10 10:54:36.446383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-06-10 10:54:36.446390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.319 qpair failed and we were unable to recover it. 00:29:12.319 [2024-06-10 10:54:36.446757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-06-10 10:54:36.446765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.319 qpair failed and we were unable to recover it. 00:29:12.319 [2024-06-10 10:54:36.447148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-06-10 10:54:36.447155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.319 qpair failed and we were unable to recover it. 00:29:12.319 [2024-06-10 10:54:36.447507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-06-10 10:54:36.447513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.319 qpair failed and we were unable to recover it. 00:29:12.319 [2024-06-10 10:54:36.447877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-06-10 10:54:36.447885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.319 qpair failed and we were unable to recover it. 00:29:12.319 [2024-06-10 10:54:36.447959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-06-10 10:54:36.447965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.319 qpair failed and we were unable to recover it. 
00:29:12.319 [2024-06-10 10:54:36.448333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-06-10 10:54:36.448340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.319 qpair failed and we were unable to recover it. 00:29:12.319 [2024-06-10 10:54:36.448578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.319 [2024-06-10 10:54:36.448585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.319 qpair failed and we were unable to recover it. 00:29:12.320 [2024-06-10 10:54:36.448955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-06-10 10:54:36.448961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.320 qpair failed and we were unable to recover it. 00:29:12.320 [2024-06-10 10:54:36.449299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-06-10 10:54:36.449306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.320 qpair failed and we were unable to recover it. 00:29:12.320 [2024-06-10 10:54:36.449662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-06-10 10:54:36.449669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.320 qpair failed and we were unable to recover it. 00:29:12.320 [2024-06-10 10:54:36.450008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-06-10 10:54:36.450016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.320 qpair failed and we were unable to recover it. 00:29:12.320 [2024-06-10 10:54:36.450377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-06-10 10:54:36.450383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.320 qpair failed and we were unable to recover it. 00:29:12.320 [2024-06-10 10:54:36.450744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-06-10 10:54:36.450751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.320 qpair failed and we were unable to recover it. 00:29:12.320 [2024-06-10 10:54:36.451085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-06-10 10:54:36.451092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.320 qpair failed and we were unable to recover it. 00:29:12.320 [2024-06-10 10:54:36.451426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-06-10 10:54:36.451434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.320 qpair failed and we were unable to recover it. 
00:29:12.320 [2024-06-10 10:54:36.451604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-06-10 10:54:36.451612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.320 qpair failed and we were unable to recover it. 00:29:12.320 [2024-06-10 10:54:36.452013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-06-10 10:54:36.452020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.320 qpair failed and we were unable to recover it. 00:29:12.320 [2024-06-10 10:54:36.452235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-06-10 10:54:36.452246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.320 qpair failed and we were unable to recover it. 00:29:12.320 [2024-06-10 10:54:36.452623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-06-10 10:54:36.452629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.320 qpair failed and we were unable to recover it. 00:29:12.320 [2024-06-10 10:54:36.452971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-06-10 10:54:36.452978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.320 qpair failed and we were unable to recover it. 00:29:12.320 [2024-06-10 10:54:36.453334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-06-10 10:54:36.453341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.320 qpair failed and we were unable to recover it. 00:29:12.320 [2024-06-10 10:54:36.453707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-06-10 10:54:36.453714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.320 qpair failed and we were unable to recover it. 00:29:12.320 [2024-06-10 10:54:36.454098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-06-10 10:54:36.454104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.320 qpair failed and we were unable to recover it. 00:29:12.320 [2024-06-10 10:54:36.454449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-06-10 10:54:36.454456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.320 qpair failed and we were unable to recover it. 00:29:12.320 [2024-06-10 10:54:36.454811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-06-10 10:54:36.454817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.320 qpair failed and we were unable to recover it. 
00:29:12.320 [2024-06-10 10:54:36.455031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-06-10 10:54:36.455037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.320 qpair failed and we were unable to recover it. 00:29:12.320 [2024-06-10 10:54:36.455372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-06-10 10:54:36.455379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.320 qpair failed and we were unable to recover it. 00:29:12.320 [2024-06-10 10:54:36.455561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-06-10 10:54:36.455569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.320 qpair failed and we were unable to recover it. 00:29:12.320 [2024-06-10 10:54:36.455943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-06-10 10:54:36.455950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.320 qpair failed and we were unable to recover it. 00:29:12.320 [2024-06-10 10:54:36.456305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-06-10 10:54:36.456312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.320 qpair failed and we were unable to recover it. 00:29:12.320 [2024-06-10 10:54:36.456671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-06-10 10:54:36.456679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.320 qpair failed and we were unable to recover it. 00:29:12.320 [2024-06-10 10:54:36.457042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-06-10 10:54:36.457050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.320 qpair failed and we were unable to recover it. 00:29:12.320 [2024-06-10 10:54:36.457404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-06-10 10:54:36.457411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.320 qpair failed and we were unable to recover it. 00:29:12.320 [2024-06-10 10:54:36.457763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-06-10 10:54:36.457771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.320 qpair failed and we were unable to recover it. 00:29:12.320 [2024-06-10 10:54:36.458149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-06-10 10:54:36.458155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.320 qpair failed and we were unable to recover it. 
00:29:12.320 [2024-06-10 10:54:36.458507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-06-10 10:54:36.458515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.320 qpair failed and we were unable to recover it. 00:29:12.320 [2024-06-10 10:54:36.458873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-06-10 10:54:36.458880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.320 qpair failed and we were unable to recover it. 00:29:12.320 [2024-06-10 10:54:36.459205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-06-10 10:54:36.459212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.320 qpair failed and we were unable to recover it. 00:29:12.320 [2024-06-10 10:54:36.459525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-06-10 10:54:36.459532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.320 qpair failed and we were unable to recover it. 00:29:12.320 [2024-06-10 10:54:36.459879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-06-10 10:54:36.459886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.320 qpair failed and we were unable to recover it. 00:29:12.320 [2024-06-10 10:54:36.460099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-06-10 10:54:36.460105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.320 qpair failed and we were unable to recover it. 00:29:12.320 [2024-06-10 10:54:36.460462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-06-10 10:54:36.460468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.320 qpair failed and we were unable to recover it. 00:29:12.320 [2024-06-10 10:54:36.460781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-06-10 10:54:36.460788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.320 qpair failed and we were unable to recover it. 00:29:12.320 [2024-06-10 10:54:36.461143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-06-10 10:54:36.461149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.320 qpair failed and we were unable to recover it. 00:29:12.320 [2024-06-10 10:54:36.461593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.320 [2024-06-10 10:54:36.461600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.320 qpair failed and we were unable to recover it. 
00:29:12.321 [2024-06-10 10:54:36.461934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.321 [2024-06-10 10:54:36.461941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.321 qpair failed and we were unable to recover it. 00:29:12.321 [2024-06-10 10:54:36.462134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.321 [2024-06-10 10:54:36.462142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.321 qpair failed and we were unable to recover it. 00:29:12.321 [2024-06-10 10:54:36.462500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.321 [2024-06-10 10:54:36.462506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.321 qpair failed and we were unable to recover it. 00:29:12.321 [2024-06-10 10:54:36.462862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.321 [2024-06-10 10:54:36.462869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.321 qpair failed and we were unable to recover it. 00:29:12.321 [2024-06-10 10:54:36.463210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.321 [2024-06-10 10:54:36.463217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.321 qpair failed and we were unable to recover it. 00:29:12.321 [2024-06-10 10:54:36.463586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.321 [2024-06-10 10:54:36.463597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.321 qpair failed and we were unable to recover it. 00:29:12.321 [2024-06-10 10:54:36.463956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.321 [2024-06-10 10:54:36.463962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.321 qpair failed and we were unable to recover it. 00:29:12.321 [2024-06-10 10:54:36.464172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.321 [2024-06-10 10:54:36.464179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.321 qpair failed and we were unable to recover it. 00:29:12.321 [2024-06-10 10:54:36.464605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.321 [2024-06-10 10:54:36.464612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.321 qpair failed and we were unable to recover it. 00:29:12.321 [2024-06-10 10:54:36.464819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.321 [2024-06-10 10:54:36.464825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.321 qpair failed and we were unable to recover it. 
00:29:12.321 [2024-06-10 10:54:36.465185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.321 [2024-06-10 10:54:36.465191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.321 qpair failed and we were unable to recover it. 00:29:12.321 [2024-06-10 10:54:36.465555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.321 [2024-06-10 10:54:36.465561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.321 qpair failed and we were unable to recover it. 00:29:12.321 [2024-06-10 10:54:36.465925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.321 [2024-06-10 10:54:36.465937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.321 qpair failed and we were unable to recover it. 00:29:12.321 [2024-06-10 10:54:36.466331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.321 [2024-06-10 10:54:36.466338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.321 qpair failed and we were unable to recover it. 00:29:12.321 [2024-06-10 10:54:36.466705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.321 [2024-06-10 10:54:36.466712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.321 qpair failed and we were unable to recover it. 00:29:12.321 [2024-06-10 10:54:36.467071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.321 [2024-06-10 10:54:36.467077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.321 qpair failed and we were unable to recover it. 00:29:12.321 [2024-06-10 10:54:36.467436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.321 [2024-06-10 10:54:36.467442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.321 qpair failed and we were unable to recover it. 00:29:12.321 [2024-06-10 10:54:36.467678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.321 [2024-06-10 10:54:36.467684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.321 qpair failed and we were unable to recover it. 00:29:12.321 [2024-06-10 10:54:36.468039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.321 [2024-06-10 10:54:36.468046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.321 qpair failed and we were unable to recover it. 00:29:12.321 [2024-06-10 10:54:36.468408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.321 [2024-06-10 10:54:36.468415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.321 qpair failed and we were unable to recover it. 
00:29:12.321 [2024-06-10 10:54:36.468790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.321 [2024-06-10 10:54:36.468797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.321 qpair failed and we were unable to recover it. 00:29:12.321 [2024-06-10 10:54:36.469177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.321 [2024-06-10 10:54:36.469183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.321 qpair failed and we were unable to recover it. 00:29:12.321 [2024-06-10 10:54:36.469381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.321 [2024-06-10 10:54:36.469388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.321 qpair failed and we were unable to recover it. 00:29:12.321 [2024-06-10 10:54:36.469762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.321 [2024-06-10 10:54:36.469768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.321 qpair failed and we were unable to recover it. 00:29:12.321 [2024-06-10 10:54:36.469975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.321 [2024-06-10 10:54:36.469981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.321 qpair failed and we were unable to recover it. 00:29:12.321 [2024-06-10 10:54:36.470364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.321 [2024-06-10 10:54:36.470371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.321 qpair failed and we were unable to recover it. 00:29:12.321 [2024-06-10 10:54:36.470743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.321 [2024-06-10 10:54:36.470751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.321 qpair failed and we were unable to recover it. 00:29:12.321 [2024-06-10 10:54:36.471069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.321 [2024-06-10 10:54:36.471076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.321 qpair failed and we were unable to recover it. 00:29:12.321 [2024-06-10 10:54:36.471428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.321 [2024-06-10 10:54:36.471436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.321 qpair failed and we were unable to recover it. 00:29:12.321 [2024-06-10 10:54:36.471800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.321 [2024-06-10 10:54:36.471806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.321 qpair failed and we were unable to recover it. 
00:29:12.321 [2024-06-10 10:54:36.472166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.321 [2024-06-10 10:54:36.472173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.321 qpair failed and we were unable to recover it. 00:29:12.321 [2024-06-10 10:54:36.472504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.321 [2024-06-10 10:54:36.472512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.321 qpair failed and we were unable to recover it. 00:29:12.321 [2024-06-10 10:54:36.472869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.321 [2024-06-10 10:54:36.472877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.321 qpair failed and we were unable to recover it. 00:29:12.321 [2024-06-10 10:54:36.473224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.321 [2024-06-10 10:54:36.473231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.321 qpair failed and we were unable to recover it. 00:29:12.321 [2024-06-10 10:54:36.473603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.321 [2024-06-10 10:54:36.473610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.321 qpair failed and we were unable to recover it. 00:29:12.321 [2024-06-10 10:54:36.473968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.321 [2024-06-10 10:54:36.473975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.321 qpair failed and we were unable to recover it. 00:29:12.321 [2024-06-10 10:54:36.474323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.321 [2024-06-10 10:54:36.474330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.321 qpair failed and we were unable to recover it. 00:29:12.321 [2024-06-10 10:54:36.474710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.321 [2024-06-10 10:54:36.474717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.321 qpair failed and we were unable to recover it. 00:29:12.321 [2024-06-10 10:54:36.475052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-06-10 10:54:36.475059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-06-10 10:54:36.475493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-06-10 10:54:36.475500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 
00:29:12.322 [2024-06-10 10:54:36.475863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-06-10 10:54:36.475869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-06-10 10:54:36.476251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-06-10 10:54:36.476258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-06-10 10:54:36.476595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-06-10 10:54:36.476603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-06-10 10:54:36.476959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-06-10 10:54:36.476967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-06-10 10:54:36.477301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-06-10 10:54:36.477308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-06-10 10:54:36.477488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-06-10 10:54:36.477496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-06-10 10:54:36.477686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-06-10 10:54:36.477694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-06-10 10:54:36.478022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-06-10 10:54:36.478028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-06-10 10:54:36.478206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-06-10 10:54:36.478212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-06-10 10:54:36.478442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-06-10 10:54:36.478449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 
00:29:12.322 [2024-06-10 10:54:36.478710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-06-10 10:54:36.478717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-06-10 10:54:36.479036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-06-10 10:54:36.479043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-06-10 10:54:36.479407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-06-10 10:54:36.479414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-06-10 10:54:36.479663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-06-10 10:54:36.479669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-06-10 10:54:36.480022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-06-10 10:54:36.480029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-06-10 10:54:36.480444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-06-10 10:54:36.480451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-06-10 10:54:36.480812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-06-10 10:54:36.480820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-06-10 10:54:36.481045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-06-10 10:54:36.481052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-06-10 10:54:36.481291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-06-10 10:54:36.481298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-06-10 10:54:36.481659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-06-10 10:54:36.481666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 
00:29:12.322 [2024-06-10 10:54:36.482000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-06-10 10:54:36.482006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-06-10 10:54:36.482359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-06-10 10:54:36.482367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-06-10 10:54:36.482715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-06-10 10:54:36.482722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-06-10 10:54:36.483083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-06-10 10:54:36.483090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-06-10 10:54:36.483448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-06-10 10:54:36.483455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-06-10 10:54:36.483625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-06-10 10:54:36.483632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-06-10 10:54:36.483951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-06-10 10:54:36.483958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-06-10 10:54:36.484309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-06-10 10:54:36.484316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-06-10 10:54:36.484587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-06-10 10:54:36.484595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-06-10 10:54:36.484973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-06-10 10:54:36.484979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 
00:29:12.322 [2024-06-10 10:54:36.485197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-06-10 10:54:36.485204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-06-10 10:54:36.485525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-06-10 10:54:36.485532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-06-10 10:54:36.485741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-06-10 10:54:36.485748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-06-10 10:54:36.486113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-06-10 10:54:36.486119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-06-10 10:54:36.486377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-06-10 10:54:36.486384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-06-10 10:54:36.486752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-06-10 10:54:36.486758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.322 [2024-06-10 10:54:36.487006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.322 [2024-06-10 10:54:36.487013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.322 qpair failed and we were unable to recover it. 00:29:12.323 [2024-06-10 10:54:36.487268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-06-10 10:54:36.487275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-06-10 10:54:36.487534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-06-10 10:54:36.487541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-06-10 10:54:36.487926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-06-10 10:54:36.487932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 
00:29:12.323 [2024-06-10 10:54:36.488290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-06-10 10:54:36.488297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-06-10 10:54:36.488572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-06-10 10:54:36.488579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-06-10 10:54:36.488790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-06-10 10:54:36.488797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-06-10 10:54:36.489159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-06-10 10:54:36.489166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-06-10 10:54:36.489514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-06-10 10:54:36.489521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-06-10 10:54:36.489900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-06-10 10:54:36.489908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-06-10 10:54:36.490234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-06-10 10:54:36.490240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-06-10 10:54:36.490591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-06-10 10:54:36.490598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-06-10 10:54:36.490954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-06-10 10:54:36.490960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-06-10 10:54:36.491151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-06-10 10:54:36.491159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 
00:29:12.323 [2024-06-10 10:54:36.491492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-06-10 10:54:36.491498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-06-10 10:54:36.491810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-06-10 10:54:36.491817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-06-10 10:54:36.492213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-06-10 10:54:36.492220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-06-10 10:54:36.492564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-06-10 10:54:36.492571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-06-10 10:54:36.492956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-06-10 10:54:36.492963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-06-10 10:54:36.493268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-06-10 10:54:36.493276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-06-10 10:54:36.493660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-06-10 10:54:36.493667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-06-10 10:54:36.493834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-06-10 10:54:36.493842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-06-10 10:54:36.494168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-06-10 10:54:36.494175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-06-10 10:54:36.494568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-06-10 10:54:36.494576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 
00:29:12.323 [2024-06-10 10:54:36.494952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-06-10 10:54:36.494967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-06-10 10:54:36.495350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-06-10 10:54:36.495358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-06-10 10:54:36.495698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-06-10 10:54:36.495705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-06-10 10:54:36.496060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-06-10 10:54:36.496067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-06-10 10:54:36.496427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-06-10 10:54:36.496434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-06-10 10:54:36.496763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-06-10 10:54:36.496770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-06-10 10:54:36.496960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-06-10 10:54:36.496966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-06-10 10:54:36.497335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-06-10 10:54:36.497343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-06-10 10:54:36.497707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-06-10 10:54:36.497713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-06-10 10:54:36.498052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-06-10 10:54:36.498058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 
00:29:12.323 [2024-06-10 10:54:36.498444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-06-10 10:54:36.498451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-06-10 10:54:36.498718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-06-10 10:54:36.498725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-06-10 10:54:36.499072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-06-10 10:54:36.499079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-06-10 10:54:36.499434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-06-10 10:54:36.499441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-06-10 10:54:36.499801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.323 [2024-06-10 10:54:36.499807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.323 qpair failed and we were unable to recover it. 00:29:12.323 [2024-06-10 10:54:36.500019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-06-10 10:54:36.500026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-06-10 10:54:36.500395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-06-10 10:54:36.500402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-06-10 10:54:36.500652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-06-10 10:54:36.500658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-06-10 10:54:36.501001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-06-10 10:54:36.501008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-06-10 10:54:36.501356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-06-10 10:54:36.501363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 
00:29:12.324 [2024-06-10 10:54:36.501537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-06-10 10:54:36.501544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-06-10 10:54:36.501749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-06-10 10:54:36.501755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-06-10 10:54:36.502093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-06-10 10:54:36.502100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-06-10 10:54:36.502460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-06-10 10:54:36.502466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-06-10 10:54:36.502651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-06-10 10:54:36.502658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-06-10 10:54:36.502928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-06-10 10:54:36.502936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-06-10 10:54:36.503297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-06-10 10:54:36.503304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-06-10 10:54:36.503727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-06-10 10:54:36.503733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-06-10 10:54:36.504068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-06-10 10:54:36.504074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 00:29:12.324 [2024-06-10 10:54:36.504330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.324 [2024-06-10 10:54:36.504336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.324 qpair failed and we were unable to recover it. 
00:29:12.324 [2024-06-10 10:54:36.504512] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:12.324 [2024-06-10 10:54:36.504539] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:12.324 [2024-06-10 10:54:36.504547] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:12.324 [2024-06-10 10:54:36.504553] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:12.324 [2024-06-10 10:54:36.504558] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:12.324 [2024-06-10 10:54:36.504711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.324 [2024-06-10 10:54:36.504717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420
00:29:12.324 qpair failed and we were unable to recover it.
00:29:12.324 [2024-06-10 10:54:36.504702] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 5
00:29:12.324 [2024-06-10 10:54:36.504858] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 6
00:29:12.324 [2024-06-10 10:54:36.505017] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 4
00:29:12.324 [2024-06-10 10:54:36.505066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.324 [2024-06-10 10:54:36.505072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420
00:29:12.324 qpair failed and we were unable to recover it.
00:29:12.324 [2024-06-10 10:54:36.505019] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 7
00:29:12.324 [2024-06-10 10:54:36.505344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.324 [2024-06-10 10:54:36.505351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420
00:29:12.324 qpair failed and we were unable to recover it.
00:29:12.324 [2024-06-10 10:54:36.505736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.324 [2024-06-10 10:54:36.505743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420
00:29:12.324 qpair failed and we were unable to recover it.
00:29:12.324 [2024-06-10 10:54:36.506082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.324 [2024-06-10 10:54:36.506088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420
00:29:12.324 qpair failed and we were unable to recover it.
00:29:12.324 [2024-06-10 10:54:36.506454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.324 [2024-06-10 10:54:36.506461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420
00:29:12.324 qpair failed and we were unable to recover it.
00:29:12.324 [2024-06-10 10:54:36.506825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.324 [2024-06-10 10:54:36.506832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420
00:29:12.324 qpair failed and we were unable to recover it.
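The app_setup_trace notices above spell out how to inspect the nvmf target while these qpair failures are occurring. A minimal sketch of that workflow, using only the commands and the /dev/shm/nvmf_trace.0 path named in the notices; the /tmp destination for the offline copy is an illustrative choice, not something the log specifies:

  # Snapshot the nvmf target's tracepoints at runtime (command taken from the notice above).
  spdk_trace -s nvmf -i 0
  # Plain 'spdk_trace' also works when this is the only SPDK application running.
  spdk_trace
  # Or keep the shared-memory trace file for offline analysis/debug (destination is arbitrary).
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0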
[repeated output condensed: from [2024-06-10 10:54:36.507088] through [2024-06-10 10:54:36.569668] (console timestamps 00:29:12.324 to 00:29:12.330) the same three-line sequence repeats continuously: posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420, then qpair failed and we were unable to recover it. For three consecutive attempts between 10:54:36.560299 and 10:54:36.560863 the failing tqpair is 0x13458c0 instead of 0x7f4b5c000b90.]
00:29:12.330 [2024-06-10 10:54:36.569887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.330 [2024-06-10 10:54:36.569893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.330 qpair failed and we were unable to recover it. 00:29:12.330 [2024-06-10 10:54:36.570249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.330 [2024-06-10 10:54:36.570255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.330 qpair failed and we were unable to recover it. 00:29:12.330 [2024-06-10 10:54:36.570470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.330 [2024-06-10 10:54:36.570476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.330 qpair failed and we were unable to recover it. 00:29:12.330 [2024-06-10 10:54:36.570853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.330 [2024-06-10 10:54:36.570859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.330 qpair failed and we were unable to recover it. 00:29:12.330 [2024-06-10 10:54:36.571178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.330 [2024-06-10 10:54:36.571185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.330 qpair failed and we were unable to recover it. 00:29:12.330 [2024-06-10 10:54:36.571556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.330 [2024-06-10 10:54:36.571562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.330 qpair failed and we were unable to recover it. 00:29:12.330 [2024-06-10 10:54:36.571949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.330 [2024-06-10 10:54:36.571955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.330 qpair failed and we were unable to recover it. 00:29:12.330 [2024-06-10 10:54:36.572284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.330 [2024-06-10 10:54:36.572291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.330 qpair failed and we were unable to recover it. 00:29:12.330 [2024-06-10 10:54:36.572651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.330 [2024-06-10 10:54:36.572657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.330 qpair failed and we were unable to recover it. 00:29:12.330 [2024-06-10 10:54:36.572914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.330 [2024-06-10 10:54:36.572920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.330 qpair failed and we were unable to recover it. 
00:29:12.330 [2024-06-10 10:54:36.573190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.330 [2024-06-10 10:54:36.573196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.330 qpair failed and we were unable to recover it. 00:29:12.330 [2024-06-10 10:54:36.573579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.330 [2024-06-10 10:54:36.573586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.330 qpair failed and we were unable to recover it. 00:29:12.330 [2024-06-10 10:54:36.573834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.330 [2024-06-10 10:54:36.573840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.330 qpair failed and we were unable to recover it. 00:29:12.330 [2024-06-10 10:54:36.574175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.330 [2024-06-10 10:54:36.574187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.330 qpair failed and we were unable to recover it. 00:29:12.330 [2024-06-10 10:54:36.574412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.330 [2024-06-10 10:54:36.574419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.330 qpair failed and we were unable to recover it. 00:29:12.330 [2024-06-10 10:54:36.574617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.330 [2024-06-10 10:54:36.574624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.330 qpair failed and we were unable to recover it. 00:29:12.330 [2024-06-10 10:54:36.575092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.330 [2024-06-10 10:54:36.575098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.330 qpair failed and we were unable to recover it. 00:29:12.330 [2024-06-10 10:54:36.575452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.330 [2024-06-10 10:54:36.575458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.330 qpair failed and we were unable to recover it. 00:29:12.330 [2024-06-10 10:54:36.575851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.330 [2024-06-10 10:54:36.575857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.330 qpair failed and we were unable to recover it. 00:29:12.330 [2024-06-10 10:54:36.576224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.330 [2024-06-10 10:54:36.576230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.330 qpair failed and we were unable to recover it. 
00:29:12.330 [2024-06-10 10:54:36.576623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.330 [2024-06-10 10:54:36.576629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.330 qpair failed and we were unable to recover it. 00:29:12.330 [2024-06-10 10:54:36.576976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.330 [2024-06-10 10:54:36.576983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.330 qpair failed and we were unable to recover it. 00:29:12.330 [2024-06-10 10:54:36.577218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.330 [2024-06-10 10:54:36.577226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.330 qpair failed and we were unable to recover it. 00:29:12.330 [2024-06-10 10:54:36.577600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.330 [2024-06-10 10:54:36.577607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.330 qpair failed and we were unable to recover it. 00:29:12.330 [2024-06-10 10:54:36.577791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.330 [2024-06-10 10:54:36.577797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.330 qpair failed and we were unable to recover it. 00:29:12.330 [2024-06-10 10:54:36.578182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.330 [2024-06-10 10:54:36.578190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.330 qpair failed and we were unable to recover it. 00:29:12.330 [2024-06-10 10:54:36.578536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.330 [2024-06-10 10:54:36.578544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.330 qpair failed and we were unable to recover it. 00:29:12.330 [2024-06-10 10:54:36.578895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.330 [2024-06-10 10:54:36.578902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.330 qpair failed and we were unable to recover it. 00:29:12.330 [2024-06-10 10:54:36.579108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.330 [2024-06-10 10:54:36.579115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.330 qpair failed and we were unable to recover it. 00:29:12.330 [2024-06-10 10:54:36.579194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.330 [2024-06-10 10:54:36.579200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.330 qpair failed and we were unable to recover it. 
00:29:12.330 [2024-06-10 10:54:36.579577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.330 [2024-06-10 10:54:36.579584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.330 qpair failed and we were unable to recover it. 00:29:12.605 [2024-06-10 10:54:36.579984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.605 [2024-06-10 10:54:36.579992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.605 qpair failed and we were unable to recover it. 00:29:12.605 [2024-06-10 10:54:36.580217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.605 [2024-06-10 10:54:36.580224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.605 qpair failed and we were unable to recover it. 00:29:12.605 [2024-06-10 10:54:36.580594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.605 [2024-06-10 10:54:36.580601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.605 qpair failed and we were unable to recover it. 00:29:12.605 [2024-06-10 10:54:36.580989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.605 [2024-06-10 10:54:36.580997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.605 qpair failed and we were unable to recover it. 00:29:12.605 [2024-06-10 10:54:36.581233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.605 [2024-06-10 10:54:36.581240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.605 qpair failed and we were unable to recover it. 00:29:12.605 [2024-06-10 10:54:36.581599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.605 [2024-06-10 10:54:36.581606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.605 qpair failed and we were unable to recover it. 00:29:12.605 [2024-06-10 10:54:36.581921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.605 [2024-06-10 10:54:36.581929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.605 qpair failed and we were unable to recover it. 00:29:12.605 [2024-06-10 10:54:36.582280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.605 [2024-06-10 10:54:36.582287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.605 qpair failed and we were unable to recover it. 00:29:12.605 [2024-06-10 10:54:36.582546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.605 [2024-06-10 10:54:36.582552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.605 qpair failed and we were unable to recover it. 
00:29:12.605 [2024-06-10 10:54:36.582900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.605 [2024-06-10 10:54:36.582906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.605 qpair failed and we were unable to recover it. 00:29:12.605 [2024-06-10 10:54:36.583151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.605 [2024-06-10 10:54:36.583157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.605 qpair failed and we were unable to recover it. 00:29:12.605 [2024-06-10 10:54:36.583395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.605 [2024-06-10 10:54:36.583402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.605 qpair failed and we were unable to recover it. 00:29:12.605 [2024-06-10 10:54:36.583672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.605 [2024-06-10 10:54:36.583679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.605 qpair failed and we were unable to recover it. 00:29:12.605 [2024-06-10 10:54:36.584016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.605 [2024-06-10 10:54:36.584023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.605 qpair failed and we were unable to recover it. 00:29:12.605 [2024-06-10 10:54:36.584249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.605 [2024-06-10 10:54:36.584255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.605 qpair failed and we were unable to recover it. 00:29:12.605 [2024-06-10 10:54:36.584499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.605 [2024-06-10 10:54:36.584506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.605 qpair failed and we were unable to recover it. 00:29:12.605 [2024-06-10 10:54:36.584903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.605 [2024-06-10 10:54:36.584909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.605 qpair failed and we were unable to recover it. 00:29:12.605 [2024-06-10 10:54:36.585083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.605 [2024-06-10 10:54:36.585089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.605 qpair failed and we were unable to recover it. 00:29:12.605 [2024-06-10 10:54:36.585386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.605 [2024-06-10 10:54:36.585392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.605 qpair failed and we were unable to recover it. 
00:29:12.605 [2024-06-10 10:54:36.585784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.605 [2024-06-10 10:54:36.585790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.605 qpair failed and we were unable to recover it. 00:29:12.605 [2024-06-10 10:54:36.586023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.605 [2024-06-10 10:54:36.586029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.605 qpair failed and we were unable to recover it. 00:29:12.605 [2024-06-10 10:54:36.586252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.605 [2024-06-10 10:54:36.586258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.605 qpair failed and we were unable to recover it. 00:29:12.605 [2024-06-10 10:54:36.586606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.605 [2024-06-10 10:54:36.586612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.605 qpair failed and we were unable to recover it. 00:29:12.605 [2024-06-10 10:54:36.586874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.605 [2024-06-10 10:54:36.586880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.605 qpair failed and we were unable to recover it. 00:29:12.605 [2024-06-10 10:54:36.587102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.605 [2024-06-10 10:54:36.587108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.605 qpair failed and we were unable to recover it. 00:29:12.605 [2024-06-10 10:54:36.587475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.605 [2024-06-10 10:54:36.587481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.605 qpair failed and we were unable to recover it. 00:29:12.605 [2024-06-10 10:54:36.587890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.605 [2024-06-10 10:54:36.587896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.605 qpair failed and we were unable to recover it. 00:29:12.605 [2024-06-10 10:54:36.588180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.605 [2024-06-10 10:54:36.588186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.605 qpair failed and we were unable to recover it. 00:29:12.605 [2024-06-10 10:54:36.588278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.605 [2024-06-10 10:54:36.588284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.605 qpair failed and we were unable to recover it. 
00:29:12.605 [2024-06-10 10:54:36.588645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.605 [2024-06-10 10:54:36.588651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.605 qpair failed and we were unable to recover it. 00:29:12.605 [2024-06-10 10:54:36.589001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.605 [2024-06-10 10:54:36.589008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.606 qpair failed and we were unable to recover it. 00:29:12.606 [2024-06-10 10:54:36.589230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.606 [2024-06-10 10:54:36.589237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.606 qpair failed and we were unable to recover it. 00:29:12.606 [2024-06-10 10:54:36.589605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.606 [2024-06-10 10:54:36.589612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.606 qpair failed and we were unable to recover it. 00:29:12.606 [2024-06-10 10:54:36.590002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.606 [2024-06-10 10:54:36.590009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.606 qpair failed and we were unable to recover it. 00:29:12.606 [2024-06-10 10:54:36.590371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.606 [2024-06-10 10:54:36.590380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.606 qpair failed and we were unable to recover it. 00:29:12.606 [2024-06-10 10:54:36.590606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.606 [2024-06-10 10:54:36.590613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.606 qpair failed and we were unable to recover it. 00:29:12.606 [2024-06-10 10:54:36.590963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.606 [2024-06-10 10:54:36.590970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.606 qpair failed and we were unable to recover it. 00:29:12.606 [2024-06-10 10:54:36.591329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.606 [2024-06-10 10:54:36.591336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.606 qpair failed and we were unable to recover it. 00:29:12.606 [2024-06-10 10:54:36.591524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.606 [2024-06-10 10:54:36.591530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.606 qpair failed and we were unable to recover it. 
00:29:12.606 [2024-06-10 10:54:36.591933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.606 [2024-06-10 10:54:36.591939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.606 qpair failed and we were unable to recover it. 00:29:12.606 [2024-06-10 10:54:36.592277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.606 [2024-06-10 10:54:36.592284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.606 qpair failed and we were unable to recover it. 00:29:12.606 [2024-06-10 10:54:36.592621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.606 [2024-06-10 10:54:36.592627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.606 qpair failed and we were unable to recover it. 00:29:12.606 [2024-06-10 10:54:36.592849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.606 [2024-06-10 10:54:36.592855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.606 qpair failed and we were unable to recover it. 00:29:12.606 [2024-06-10 10:54:36.593110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.606 [2024-06-10 10:54:36.593117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.606 qpair failed and we were unable to recover it. 00:29:12.606 [2024-06-10 10:54:36.593471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.606 [2024-06-10 10:54:36.593477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.606 qpair failed and we were unable to recover it. 00:29:12.606 [2024-06-10 10:54:36.593892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.606 [2024-06-10 10:54:36.593898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.606 qpair failed and we were unable to recover it. 00:29:12.606 [2024-06-10 10:54:36.594107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.606 [2024-06-10 10:54:36.594113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.606 qpair failed and we were unable to recover it. 00:29:12.606 [2024-06-10 10:54:36.594488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.606 [2024-06-10 10:54:36.594494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.606 qpair failed and we were unable to recover it. 00:29:12.606 [2024-06-10 10:54:36.594832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.606 [2024-06-10 10:54:36.594838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.606 qpair failed and we were unable to recover it. 
00:29:12.606 [2024-06-10 10:54:36.595015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.606 [2024-06-10 10:54:36.595021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.606 qpair failed and we were unable to recover it. 00:29:12.606 [2024-06-10 10:54:36.595379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.606 [2024-06-10 10:54:36.595386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.606 qpair failed and we were unable to recover it. 00:29:12.606 [2024-06-10 10:54:36.595733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.606 [2024-06-10 10:54:36.595740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.606 qpair failed and we were unable to recover it. 00:29:12.606 [2024-06-10 10:54:36.595979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.606 [2024-06-10 10:54:36.595985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.606 qpair failed and we were unable to recover it. 00:29:12.606 [2024-06-10 10:54:36.596303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.606 [2024-06-10 10:54:36.596310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.606 qpair failed and we were unable to recover it. 00:29:12.606 [2024-06-10 10:54:36.596535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.606 [2024-06-10 10:54:36.596542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.606 qpair failed and we were unable to recover it. 00:29:12.606 [2024-06-10 10:54:36.596792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.606 [2024-06-10 10:54:36.596799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.606 qpair failed and we were unable to recover it. 00:29:12.606 [2024-06-10 10:54:36.597164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.606 [2024-06-10 10:54:36.597171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.606 qpair failed and we were unable to recover it. 00:29:12.606 [2024-06-10 10:54:36.597414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.606 [2024-06-10 10:54:36.597420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.606 qpair failed and we were unable to recover it. 00:29:12.606 [2024-06-10 10:54:36.597725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.606 [2024-06-10 10:54:36.597732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.606 qpair failed and we were unable to recover it. 
00:29:12.606 [2024-06-10 10:54:36.597977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.606 [2024-06-10 10:54:36.597984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.606 qpair failed and we were unable to recover it. 00:29:12.606 [2024-06-10 10:54:36.598255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.606 [2024-06-10 10:54:36.598262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.606 qpair failed and we were unable to recover it. 00:29:12.606 [2024-06-10 10:54:36.598480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.606 [2024-06-10 10:54:36.598489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.606 qpair failed and we were unable to recover it. 00:29:12.606 [2024-06-10 10:54:36.598675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.606 [2024-06-10 10:54:36.598682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.606 qpair failed and we were unable to recover it. 00:29:12.606 [2024-06-10 10:54:36.599078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.606 [2024-06-10 10:54:36.599084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.606 qpair failed and we were unable to recover it. 00:29:12.606 [2024-06-10 10:54:36.599431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.606 [2024-06-10 10:54:36.599444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.606 qpair failed and we were unable to recover it. 00:29:12.606 [2024-06-10 10:54:36.599807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.606 [2024-06-10 10:54:36.599813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.606 qpair failed and we were unable to recover it. 00:29:12.606 [2024-06-10 10:54:36.600048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.606 [2024-06-10 10:54:36.600054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.606 qpair failed and we were unable to recover it. 00:29:12.607 [2024-06-10 10:54:36.600413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.607 [2024-06-10 10:54:36.600419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.607 qpair failed and we were unable to recover it. 00:29:12.607 [2024-06-10 10:54:36.600767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.607 [2024-06-10 10:54:36.600773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.607 qpair failed and we were unable to recover it. 
00:29:12.607 [2024-06-10 10:54:36.601130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.607 [2024-06-10 10:54:36.601137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.607 qpair failed and we were unable to recover it. 00:29:12.607 [2024-06-10 10:54:36.601403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.607 [2024-06-10 10:54:36.601410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.607 qpair failed and we were unable to recover it. 00:29:12.607 [2024-06-10 10:54:36.601760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.607 [2024-06-10 10:54:36.601767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.607 qpair failed and we were unable to recover it. 00:29:12.607 [2024-06-10 10:54:36.602158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.607 [2024-06-10 10:54:36.602164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.607 qpair failed and we were unable to recover it. 00:29:12.607 [2024-06-10 10:54:36.602380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.607 [2024-06-10 10:54:36.602387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.607 qpair failed and we were unable to recover it. 00:29:12.607 [2024-06-10 10:54:36.602754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.607 [2024-06-10 10:54:36.602760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.607 qpair failed and we were unable to recover it. 00:29:12.607 [2024-06-10 10:54:36.602975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.607 [2024-06-10 10:54:36.602982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.607 qpair failed and we were unable to recover it. 00:29:12.607 [2024-06-10 10:54:36.603212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.607 [2024-06-10 10:54:36.603218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.607 qpair failed and we were unable to recover it. 00:29:12.607 [2024-06-10 10:54:36.603434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.607 [2024-06-10 10:54:36.603441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.607 qpair failed and we were unable to recover it. 00:29:12.607 [2024-06-10 10:54:36.603800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.607 [2024-06-10 10:54:36.603807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.607 qpair failed and we were unable to recover it. 
00:29:12.607 [2024-06-10 10:54:36.604010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.607 [2024-06-10 10:54:36.604016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.607 qpair failed and we were unable to recover it. 00:29:12.607 [2024-06-10 10:54:36.604360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.607 [2024-06-10 10:54:36.604367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.607 qpair failed and we were unable to recover it. 00:29:12.607 [2024-06-10 10:54:36.604724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.607 [2024-06-10 10:54:36.604730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.607 qpair failed and we were unable to recover it. 00:29:12.607 [2024-06-10 10:54:36.605225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.607 [2024-06-10 10:54:36.605231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.607 qpair failed and we were unable to recover it. 00:29:12.607 [2024-06-10 10:54:36.605451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.607 [2024-06-10 10:54:36.605458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.607 qpair failed and we were unable to recover it. 00:29:12.607 [2024-06-10 10:54:36.605833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.607 [2024-06-10 10:54:36.605840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.607 qpair failed and we were unable to recover it. 00:29:12.607 [2024-06-10 10:54:36.606205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.607 [2024-06-10 10:54:36.606212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.607 qpair failed and we were unable to recover it. 00:29:12.607 [2024-06-10 10:54:36.606573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.607 [2024-06-10 10:54:36.606580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.607 qpair failed and we were unable to recover it. 00:29:12.607 [2024-06-10 10:54:36.606783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.607 [2024-06-10 10:54:36.606789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.607 qpair failed and we were unable to recover it. 00:29:12.607 [2024-06-10 10:54:36.607167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.607 [2024-06-10 10:54:36.607173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.607 qpair failed and we were unable to recover it. 
00:29:12.607 [2024-06-10 10:54:36.607504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.607 [2024-06-10 10:54:36.607511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.607 qpair failed and we were unable to recover it. 00:29:12.607 [2024-06-10 10:54:36.607874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.607 [2024-06-10 10:54:36.607881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.607 qpair failed and we were unable to recover it. 00:29:12.607 [2024-06-10 10:54:36.608219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.607 [2024-06-10 10:54:36.608227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.607 qpair failed and we were unable to recover it. 00:29:12.607 [2024-06-10 10:54:36.608444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.607 [2024-06-10 10:54:36.608450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.607 qpair failed and we were unable to recover it. 00:29:12.607 [2024-06-10 10:54:36.608716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.607 [2024-06-10 10:54:36.608722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.607 qpair failed and we were unable to recover it. 00:29:12.607 [2024-06-10 10:54:36.609118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.607 [2024-06-10 10:54:36.609124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.607 qpair failed and we were unable to recover it. 00:29:12.607 [2024-06-10 10:54:36.609472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.607 [2024-06-10 10:54:36.609479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.607 qpair failed and we were unable to recover it. 00:29:12.607 [2024-06-10 10:54:36.609736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.607 [2024-06-10 10:54:36.609743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.607 qpair failed and we were unable to recover it. 00:29:12.607 [2024-06-10 10:54:36.610110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.607 [2024-06-10 10:54:36.610116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.607 qpair failed and we were unable to recover it. 00:29:12.607 [2024-06-10 10:54:36.610380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.607 [2024-06-10 10:54:36.610386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.607 qpair failed and we were unable to recover it. 
00:29:12.607 [2024-06-10 10:54:36.610571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.607 [2024-06-10 10:54:36.610578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420
00:29:12.607 qpair failed and we were unable to recover it.
[... the same three-line error pattern repeats for every intervening connection attempt between 2024-06-10 10:54:36.610 and 10:54:36.676: connect() to 10.0.0.2, port=4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error on tqpair=0x7f4b5c000b90, and each qpair fails and cannot be recovered ...]
00:29:12.613 [2024-06-10 10:54:36.676610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.614 [2024-06-10 10:54:36.676618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420
00:29:12.614 qpair failed and we were unable to recover it.
00:29:12.614 [2024-06-10 10:54:36.676981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.614 [2024-06-10 10:54:36.676989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.614 qpair failed and we were unable to recover it. 00:29:12.614 [2024-06-10 10:54:36.677185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.614 [2024-06-10 10:54:36.677193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.614 qpair failed and we were unable to recover it. 00:29:12.614 [2024-06-10 10:54:36.677572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.614 [2024-06-10 10:54:36.677581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.614 qpair failed and we were unable to recover it. 00:29:12.614 [2024-06-10 10:54:36.677944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.614 [2024-06-10 10:54:36.677953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.614 qpair failed and we were unable to recover it. 00:29:12.614 [2024-06-10 10:54:36.678315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.614 [2024-06-10 10:54:36.678324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.614 qpair failed and we were unable to recover it. 00:29:12.614 [2024-06-10 10:54:36.678546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.614 [2024-06-10 10:54:36.678555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.614 qpair failed and we were unable to recover it. 00:29:12.614 [2024-06-10 10:54:36.678914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.614 [2024-06-10 10:54:36.678923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.614 qpair failed and we were unable to recover it. 00:29:12.614 [2024-06-10 10:54:36.679147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.614 [2024-06-10 10:54:36.679157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.614 qpair failed and we were unable to recover it. 00:29:12.614 [2024-06-10 10:54:36.679473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.614 [2024-06-10 10:54:36.679481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.614 qpair failed and we were unable to recover it. 00:29:12.614 [2024-06-10 10:54:36.679862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.614 [2024-06-10 10:54:36.679871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.614 qpair failed and we were unable to recover it. 
00:29:12.614 [2024-06-10 10:54:36.680229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.614 [2024-06-10 10:54:36.680238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.614 qpair failed and we were unable to recover it. 00:29:12.614 [2024-06-10 10:54:36.680456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.614 [2024-06-10 10:54:36.680464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.614 qpair failed and we were unable to recover it. 00:29:12.614 [2024-06-10 10:54:36.680829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.614 [2024-06-10 10:54:36.680837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.614 qpair failed and we were unable to recover it. 00:29:12.614 [2024-06-10 10:54:36.681193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.614 [2024-06-10 10:54:36.681202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.614 qpair failed and we were unable to recover it. 00:29:12.614 [2024-06-10 10:54:36.681574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.614 [2024-06-10 10:54:36.681583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.614 qpair failed and we were unable to recover it. 00:29:12.614 [2024-06-10 10:54:36.681795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.614 [2024-06-10 10:54:36.681804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.614 qpair failed and we were unable to recover it. 00:29:12.614 [2024-06-10 10:54:36.682173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.614 [2024-06-10 10:54:36.682181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.614 qpair failed and we were unable to recover it. 00:29:12.614 [2024-06-10 10:54:36.682450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.614 [2024-06-10 10:54:36.682459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.614 qpair failed and we were unable to recover it. 00:29:12.614 [2024-06-10 10:54:36.682815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.614 [2024-06-10 10:54:36.682824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.614 qpair failed and we were unable to recover it. 00:29:12.614 [2024-06-10 10:54:36.683181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.614 [2024-06-10 10:54:36.683190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.614 qpair failed and we were unable to recover it. 
00:29:12.614 [2024-06-10 10:54:36.683550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.614 [2024-06-10 10:54:36.683560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.614 qpair failed and we were unable to recover it. 00:29:12.614 [2024-06-10 10:54:36.683748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.614 [2024-06-10 10:54:36.683757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.614 qpair failed and we were unable to recover it. 00:29:12.614 [2024-06-10 10:54:36.684099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.614 [2024-06-10 10:54:36.684108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.614 qpair failed and we were unable to recover it. 00:29:12.614 [2024-06-10 10:54:36.684464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.614 [2024-06-10 10:54:36.684473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.614 qpair failed and we were unable to recover it. 00:29:12.614 [2024-06-10 10:54:36.684830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.614 [2024-06-10 10:54:36.684839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.614 qpair failed and we were unable to recover it. 00:29:12.614 [2024-06-10 10:54:36.685183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.614 [2024-06-10 10:54:36.685191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.614 qpair failed and we were unable to recover it. 00:29:12.614 [2024-06-10 10:54:36.685551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.614 [2024-06-10 10:54:36.685559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.614 qpair failed and we were unable to recover it. 00:29:12.614 [2024-06-10 10:54:36.685949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.614 [2024-06-10 10:54:36.685957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.614 qpair failed and we were unable to recover it. 00:29:12.614 [2024-06-10 10:54:36.686318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.614 [2024-06-10 10:54:36.686326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.614 qpair failed and we were unable to recover it. 00:29:12.615 [2024-06-10 10:54:36.686717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.615 [2024-06-10 10:54:36.686725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.615 qpair failed and we were unable to recover it. 
00:29:12.615 [2024-06-10 10:54:36.686945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.615 [2024-06-10 10:54:36.686953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.615 qpair failed and we were unable to recover it. 00:29:12.615 [2024-06-10 10:54:36.687118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.615 [2024-06-10 10:54:36.687126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.615 qpair failed and we were unable to recover it. 00:29:12.615 [2024-06-10 10:54:36.687498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.615 [2024-06-10 10:54:36.687507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.615 qpair failed and we were unable to recover it. 00:29:12.615 [2024-06-10 10:54:36.687587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.615 [2024-06-10 10:54:36.687593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.615 qpair failed and we were unable to recover it. 00:29:12.615 [2024-06-10 10:54:36.687943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.615 [2024-06-10 10:54:36.687951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.615 qpair failed and we were unable to recover it. 00:29:12.615 [2024-06-10 10:54:36.688171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.615 [2024-06-10 10:54:36.688179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.615 qpair failed and we were unable to recover it. 00:29:12.615 [2024-06-10 10:54:36.688445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.615 [2024-06-10 10:54:36.688453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.615 qpair failed and we were unable to recover it. 00:29:12.615 [2024-06-10 10:54:36.688802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.615 [2024-06-10 10:54:36.688810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.615 qpair failed and we were unable to recover it. 00:29:12.615 [2024-06-10 10:54:36.689167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.615 [2024-06-10 10:54:36.689175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.615 qpair failed and we were unable to recover it. 00:29:12.615 [2024-06-10 10:54:36.689399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.615 [2024-06-10 10:54:36.689408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.615 qpair failed and we were unable to recover it. 
00:29:12.615 [2024-06-10 10:54:36.689784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.615 [2024-06-10 10:54:36.689793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.615 qpair failed and we were unable to recover it. 00:29:12.615 [2024-06-10 10:54:36.690150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.615 [2024-06-10 10:54:36.690159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.615 qpair failed and we were unable to recover it. 00:29:12.615 [2024-06-10 10:54:36.690342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.615 [2024-06-10 10:54:36.690351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.615 qpair failed and we were unable to recover it. 00:29:12.615 [2024-06-10 10:54:36.690698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.615 [2024-06-10 10:54:36.690706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.615 qpair failed and we were unable to recover it. 00:29:12.615 [2024-06-10 10:54:36.690758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.615 [2024-06-10 10:54:36.690766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.615 qpair failed and we were unable to recover it. 00:29:12.615 [2024-06-10 10:54:36.691100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.615 [2024-06-10 10:54:36.691109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.615 qpair failed and we were unable to recover it. 00:29:12.615 [2024-06-10 10:54:36.691317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.615 [2024-06-10 10:54:36.691325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.615 qpair failed and we were unable to recover it. 00:29:12.615 [2024-06-10 10:54:36.691641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.615 [2024-06-10 10:54:36.691649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.615 qpair failed and we were unable to recover it. 00:29:12.615 [2024-06-10 10:54:36.692009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.615 [2024-06-10 10:54:36.692016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.615 qpair failed and we were unable to recover it. 00:29:12.615 [2024-06-10 10:54:36.692367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.615 [2024-06-10 10:54:36.692376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.615 qpair failed and we were unable to recover it. 
00:29:12.615 [2024-06-10 10:54:36.692629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.615 [2024-06-10 10:54:36.692636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.615 qpair failed and we were unable to recover it. 00:29:12.615 [2024-06-10 10:54:36.692993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.615 [2024-06-10 10:54:36.693001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.615 qpair failed and we were unable to recover it. 00:29:12.615 [2024-06-10 10:54:36.693356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.615 [2024-06-10 10:54:36.693364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.615 qpair failed and we were unable to recover it. 00:29:12.615 [2024-06-10 10:54:36.693585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.615 [2024-06-10 10:54:36.693592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.615 qpair failed and we were unable to recover it. 00:29:12.615 [2024-06-10 10:54:36.693958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.615 [2024-06-10 10:54:36.693966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.615 qpair failed and we were unable to recover it. 00:29:12.615 [2024-06-10 10:54:36.694344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.615 [2024-06-10 10:54:36.694352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.615 qpair failed and we were unable to recover it. 00:29:12.615 [2024-06-10 10:54:36.694716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.615 [2024-06-10 10:54:36.694724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.615 qpair failed and we were unable to recover it. 00:29:12.615 [2024-06-10 10:54:36.695029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.615 [2024-06-10 10:54:36.695037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.615 qpair failed and we were unable to recover it. 00:29:12.615 [2024-06-10 10:54:36.695398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.615 [2024-06-10 10:54:36.695407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.615 qpair failed and we were unable to recover it. 00:29:12.615 [2024-06-10 10:54:36.695763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.615 [2024-06-10 10:54:36.695771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.615 qpair failed and we were unable to recover it. 
00:29:12.615 [2024-06-10 10:54:36.696170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.615 [2024-06-10 10:54:36.696178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.615 qpair failed and we were unable to recover it. 00:29:12.615 [2024-06-10 10:54:36.696537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.615 [2024-06-10 10:54:36.696545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.615 qpair failed and we were unable to recover it. 00:29:12.615 [2024-06-10 10:54:36.696810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.615 [2024-06-10 10:54:36.696818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.615 qpair failed and we were unable to recover it. 00:29:12.615 [2024-06-10 10:54:36.697194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.615 [2024-06-10 10:54:36.697202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.615 qpair failed and we were unable to recover it. 00:29:12.615 [2024-06-10 10:54:36.697362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.615 [2024-06-10 10:54:36.697370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.615 qpair failed and we were unable to recover it. 00:29:12.615 [2024-06-10 10:54:36.697546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.615 [2024-06-10 10:54:36.697554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.615 qpair failed and we were unable to recover it. 00:29:12.616 [2024-06-10 10:54:36.697794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.616 [2024-06-10 10:54:36.697802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.616 qpair failed and we were unable to recover it. 00:29:12.616 [2024-06-10 10:54:36.698182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.616 [2024-06-10 10:54:36.698190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.616 qpair failed and we were unable to recover it. 00:29:12.616 [2024-06-10 10:54:36.698551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.616 [2024-06-10 10:54:36.698559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.616 qpair failed and we were unable to recover it. 00:29:12.616 [2024-06-10 10:54:36.698903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.616 [2024-06-10 10:54:36.698911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.616 qpair failed and we were unable to recover it. 
00:29:12.616 [2024-06-10 10:54:36.699268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.616 [2024-06-10 10:54:36.699277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.616 qpair failed and we were unable to recover it. 00:29:12.616 [2024-06-10 10:54:36.699599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.616 [2024-06-10 10:54:36.699607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.616 qpair failed and we were unable to recover it. 00:29:12.616 [2024-06-10 10:54:36.699962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.616 [2024-06-10 10:54:36.699971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.616 qpair failed and we were unable to recover it. 00:29:12.616 [2024-06-10 10:54:36.700190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.616 [2024-06-10 10:54:36.700197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.616 qpair failed and we were unable to recover it. 00:29:12.616 [2024-06-10 10:54:36.700553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.616 [2024-06-10 10:54:36.700560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.616 qpair failed and we were unable to recover it. 00:29:12.616 [2024-06-10 10:54:36.700941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.616 [2024-06-10 10:54:36.700949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.616 qpair failed and we were unable to recover it. 00:29:12.616 [2024-06-10 10:54:36.701310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.616 [2024-06-10 10:54:36.701318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.616 qpair failed and we were unable to recover it. 00:29:12.616 [2024-06-10 10:54:36.701687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.616 [2024-06-10 10:54:36.701696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.616 qpair failed and we were unable to recover it. 00:29:12.616 [2024-06-10 10:54:36.701941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.616 [2024-06-10 10:54:36.701949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.616 qpair failed and we were unable to recover it. 00:29:12.616 [2024-06-10 10:54:36.702297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.616 [2024-06-10 10:54:36.702305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.616 qpair failed and we were unable to recover it. 
00:29:12.616 [2024-06-10 10:54:36.702612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.616 [2024-06-10 10:54:36.702619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.616 qpair failed and we were unable to recover it. 00:29:12.616 [2024-06-10 10:54:36.702976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.616 [2024-06-10 10:54:36.702984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.616 qpair failed and we were unable to recover it. 00:29:12.616 [2024-06-10 10:54:36.703424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.616 [2024-06-10 10:54:36.703432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.616 qpair failed and we were unable to recover it. 00:29:12.616 [2024-06-10 10:54:36.703785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.616 [2024-06-10 10:54:36.703793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.616 qpair failed and we were unable to recover it. 00:29:12.616 [2024-06-10 10:54:36.703997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.616 [2024-06-10 10:54:36.704004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.616 qpair failed and we were unable to recover it. 00:29:12.616 [2024-06-10 10:54:36.704166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.616 [2024-06-10 10:54:36.704173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.616 qpair failed and we were unable to recover it. 00:29:12.616 [2024-06-10 10:54:36.704526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.616 [2024-06-10 10:54:36.704534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.616 qpair failed and we were unable to recover it. 00:29:12.616 [2024-06-10 10:54:36.704836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.616 [2024-06-10 10:54:36.704845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.616 qpair failed and we were unable to recover it. 00:29:12.616 [2024-06-10 10:54:36.705042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.616 [2024-06-10 10:54:36.705049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.616 qpair failed and we were unable to recover it. 00:29:12.616 [2024-06-10 10:54:36.705222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.616 [2024-06-10 10:54:36.705229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.616 qpair failed and we were unable to recover it. 
00:29:12.616 [2024-06-10 10:54:36.705641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.616 [2024-06-10 10:54:36.705649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.616 qpair failed and we were unable to recover it. 00:29:12.616 [2024-06-10 10:54:36.705997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.616 [2024-06-10 10:54:36.706004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.616 qpair failed and we were unable to recover it. 00:29:12.616 [2024-06-10 10:54:36.706347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.616 [2024-06-10 10:54:36.706364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.616 qpair failed and we were unable to recover it. 00:29:12.616 [2024-06-10 10:54:36.706744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.616 [2024-06-10 10:54:36.706752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.616 qpair failed and we were unable to recover it. 00:29:12.616 [2024-06-10 10:54:36.707109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.616 [2024-06-10 10:54:36.707117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.616 qpair failed and we were unable to recover it. 00:29:12.616 [2024-06-10 10:54:36.707316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.616 [2024-06-10 10:54:36.707323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.616 qpair failed and we were unable to recover it. 00:29:12.616 [2024-06-10 10:54:36.707704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.616 [2024-06-10 10:54:36.707712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.616 qpair failed and we were unable to recover it. 00:29:12.616 [2024-06-10 10:54:36.708071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.616 [2024-06-10 10:54:36.708078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.616 qpair failed and we were unable to recover it. 00:29:12.616 [2024-06-10 10:54:36.708277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.616 [2024-06-10 10:54:36.708284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.616 qpair failed and we were unable to recover it. 00:29:12.616 [2024-06-10 10:54:36.708605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.616 [2024-06-10 10:54:36.708612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.616 qpair failed and we were unable to recover it. 
00:29:12.617 [2024-06-10 10:54:36.708994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.617 [2024-06-10 10:54:36.709002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.617 qpair failed and we were unable to recover it. 00:29:12.617 [2024-06-10 10:54:36.709441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.617 [2024-06-10 10:54:36.709449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.617 qpair failed and we were unable to recover it. 00:29:12.617 [2024-06-10 10:54:36.709675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.617 [2024-06-10 10:54:36.709682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.617 qpair failed and we were unable to recover it. 00:29:12.617 [2024-06-10 10:54:36.710050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.617 [2024-06-10 10:54:36.710058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.617 qpair failed and we were unable to recover it. 00:29:12.617 [2024-06-10 10:54:36.710118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.617 [2024-06-10 10:54:36.710125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.617 qpair failed and we were unable to recover it. 00:29:12.617 [2024-06-10 10:54:36.710444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.617 [2024-06-10 10:54:36.710452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.617 qpair failed and we were unable to recover it. 00:29:12.617 [2024-06-10 10:54:36.710901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.617 [2024-06-10 10:54:36.710909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.617 qpair failed and we were unable to recover it. 00:29:12.617 [2024-06-10 10:54:36.711259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.617 [2024-06-10 10:54:36.711268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.617 qpair failed and we were unable to recover it. 00:29:12.617 [2024-06-10 10:54:36.711436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.617 [2024-06-10 10:54:36.711444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.617 qpair failed and we were unable to recover it. 00:29:12.617 [2024-06-10 10:54:36.711774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.617 [2024-06-10 10:54:36.711783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.617 qpair failed and we were unable to recover it. 
00:29:12.617 [2024-06-10 10:54:36.712141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.617 [2024-06-10 10:54:36.712149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.617 qpair failed and we were unable to recover it. 00:29:12.617 [2024-06-10 10:54:36.712358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.617 [2024-06-10 10:54:36.712366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.617 qpair failed and we were unable to recover it. 00:29:12.617 [2024-06-10 10:54:36.712724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.617 [2024-06-10 10:54:36.712733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.617 qpair failed and we were unable to recover it. 00:29:12.617 [2024-06-10 10:54:36.713112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.617 [2024-06-10 10:54:36.713120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.617 qpair failed and we were unable to recover it. 00:29:12.617 [2024-06-10 10:54:36.713314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.617 [2024-06-10 10:54:36.713321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.617 qpair failed and we were unable to recover it. 00:29:12.617 [2024-06-10 10:54:36.713672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.617 [2024-06-10 10:54:36.713680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.617 qpair failed and we were unable to recover it. 00:29:12.617 [2024-06-10 10:54:36.713870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.617 [2024-06-10 10:54:36.713879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.617 qpair failed and we were unable to recover it. 00:29:12.617 [2024-06-10 10:54:36.713950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.617 [2024-06-10 10:54:36.713957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.617 qpair failed and we were unable to recover it. 00:29:12.617 [2024-06-10 10:54:36.714200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.617 [2024-06-10 10:54:36.714208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.617 qpair failed and we were unable to recover it. 00:29:12.617 [2024-06-10 10:54:36.714572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.617 [2024-06-10 10:54:36.714580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.617 qpair failed and we were unable to recover it. 
00:29:12.617 [2024-06-10 10:54:36.715012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.617 [2024-06-10 10:54:36.715020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.617 qpair failed and we were unable to recover it. 00:29:12.617 [2024-06-10 10:54:36.715377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.617 [2024-06-10 10:54:36.715385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.617 qpair failed and we were unable to recover it. 00:29:12.617 [2024-06-10 10:54:36.715765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.617 [2024-06-10 10:54:36.715773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.617 qpair failed and we were unable to recover it. 00:29:12.617 [2024-06-10 10:54:36.716129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.617 [2024-06-10 10:54:36.716137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.617 qpair failed and we were unable to recover it. 00:29:12.617 [2024-06-10 10:54:36.716555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.617 [2024-06-10 10:54:36.716562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.617 qpair failed and we were unable to recover it. 00:29:12.617 [2024-06-10 10:54:36.716924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.617 [2024-06-10 10:54:36.716932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.617 qpair failed and we were unable to recover it. 00:29:12.617 [2024-06-10 10:54:36.717207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.617 [2024-06-10 10:54:36.717215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.617 qpair failed and we were unable to recover it. 00:29:12.617 [2024-06-10 10:54:36.717268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.617 [2024-06-10 10:54:36.717274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.617 qpair failed and we were unable to recover it. 00:29:12.617 [2024-06-10 10:54:36.717636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.617 [2024-06-10 10:54:36.717645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.617 qpair failed and we were unable to recover it. 00:29:12.617 [2024-06-10 10:54:36.718005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.617 [2024-06-10 10:54:36.718014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.617 qpair failed and we were unable to recover it. 
00:29:12.617 [2024-06-10 10:54:36.718277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.617 [2024-06-10 10:54:36.718286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.617 qpair failed and we were unable to recover it. 00:29:12.617 [2024-06-10 10:54:36.718681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.617 [2024-06-10 10:54:36.718689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.617 qpair failed and we were unable to recover it. 00:29:12.617 [2024-06-10 10:54:36.718906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.617 [2024-06-10 10:54:36.718913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.617 qpair failed and we were unable to recover it. 00:29:12.617 [2024-06-10 10:54:36.719276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.617 [2024-06-10 10:54:36.719284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.617 qpair failed and we were unable to recover it. 00:29:12.617 [2024-06-10 10:54:36.719642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.617 [2024-06-10 10:54:36.719649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.617 qpair failed and we were unable to recover it. 00:29:12.617 [2024-06-10 10:54:36.720029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.617 [2024-06-10 10:54:36.720037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.617 qpair failed and we were unable to recover it. 00:29:12.617 [2024-06-10 10:54:36.720394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.617 [2024-06-10 10:54:36.720402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.617 qpair failed and we were unable to recover it. 00:29:12.617 [2024-06-10 10:54:36.720749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.617 [2024-06-10 10:54:36.720757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.617 qpair failed and we were unable to recover it. 00:29:12.618 [2024-06-10 10:54:36.721112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.618 [2024-06-10 10:54:36.721120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.618 qpair failed and we were unable to recover it. 00:29:12.618 [2024-06-10 10:54:36.721545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.618 [2024-06-10 10:54:36.721553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.618 qpair failed and we were unable to recover it. 
00:29:12.618 [2024-06-10 10:54:36.721911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.618 [2024-06-10 10:54:36.721919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.618 qpair failed and we were unable to recover it. 00:29:12.618 [2024-06-10 10:54:36.722185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.618 [2024-06-10 10:54:36.722192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.618 qpair failed and we were unable to recover it. 00:29:12.618 [2024-06-10 10:54:36.722468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.618 [2024-06-10 10:54:36.722476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.618 qpair failed and we were unable to recover it. 00:29:12.618 [2024-06-10 10:54:36.722864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.618 [2024-06-10 10:54:36.722872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.618 qpair failed and we were unable to recover it. 00:29:12.618 [2024-06-10 10:54:36.723236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.618 [2024-06-10 10:54:36.723247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.618 qpair failed and we were unable to recover it. 00:29:12.618 [2024-06-10 10:54:36.723533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.618 [2024-06-10 10:54:36.723541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.618 qpair failed and we were unable to recover it. 00:29:12.618 [2024-06-10 10:54:36.723901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.618 [2024-06-10 10:54:36.723908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.618 qpair failed and we were unable to recover it. 00:29:12.618 [2024-06-10 10:54:36.724290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.618 [2024-06-10 10:54:36.724299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.618 qpair failed and we were unable to recover it. 00:29:12.618 [2024-06-10 10:54:36.724694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.618 [2024-06-10 10:54:36.724702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.618 qpair failed and we were unable to recover it. 00:29:12.618 [2024-06-10 10:54:36.725070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.618 [2024-06-10 10:54:36.725078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.618 qpair failed and we were unable to recover it. 
00:29:12.618 [2024-06-10 10:54:36.725300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.618 [2024-06-10 10:54:36.725308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.618 qpair failed and we were unable to recover it. 00:29:12.618 [2024-06-10 10:54:36.725662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.618 [2024-06-10 10:54:36.725670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.618 qpair failed and we were unable to recover it. 00:29:12.618 [2024-06-10 10:54:36.726045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.618 [2024-06-10 10:54:36.726055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.618 qpair failed and we were unable to recover it. 00:29:12.618 [2024-06-10 10:54:36.726276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.618 [2024-06-10 10:54:36.726284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.618 qpair failed and we were unable to recover it. 00:29:12.618 [2024-06-10 10:54:36.726624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.618 [2024-06-10 10:54:36.726632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.618 qpair failed and we were unable to recover it. 00:29:12.618 [2024-06-10 10:54:36.726826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.618 [2024-06-10 10:54:36.726833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.618 qpair failed and we were unable to recover it. 00:29:12.618 [2024-06-10 10:54:36.727050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.618 [2024-06-10 10:54:36.727059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.618 qpair failed and we were unable to recover it. 00:29:12.618 [2024-06-10 10:54:36.727250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.618 [2024-06-10 10:54:36.727258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.618 qpair failed and we were unable to recover it. 00:29:12.618 [2024-06-10 10:54:36.727589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.618 [2024-06-10 10:54:36.727597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.618 qpair failed and we were unable to recover it. 00:29:12.618 [2024-06-10 10:54:36.727978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.618 [2024-06-10 10:54:36.727986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.618 qpair failed and we were unable to recover it. 
00:29:12.618 [2024-06-10 10:54:36.728360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.618 [2024-06-10 10:54:36.728368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.618 qpair failed and we were unable to recover it. 00:29:12.618 [2024-06-10 10:54:36.728591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.618 [2024-06-10 10:54:36.728598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.618 qpair failed and we were unable to recover it. 00:29:12.618 [2024-06-10 10:54:36.728804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.618 [2024-06-10 10:54:36.728812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.618 qpair failed and we were unable to recover it. 00:29:12.618 [2024-06-10 10:54:36.729167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.618 [2024-06-10 10:54:36.729174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.618 qpair failed and we were unable to recover it. 00:29:12.618 [2024-06-10 10:54:36.729532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.618 [2024-06-10 10:54:36.729540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.618 qpair failed and we were unable to recover it. 00:29:12.618 [2024-06-10 10:54:36.729748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.618 [2024-06-10 10:54:36.729756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.618 qpair failed and we were unable to recover it. 00:29:12.618 [2024-06-10 10:54:36.730020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.618 [2024-06-10 10:54:36.730028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.618 qpair failed and we were unable to recover it. 00:29:12.618 [2024-06-10 10:54:36.730253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.618 [2024-06-10 10:54:36.730262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.618 qpair failed and we were unable to recover it. 00:29:12.618 [2024-06-10 10:54:36.730432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.618 [2024-06-10 10:54:36.730440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.618 qpair failed and we were unable to recover it. 00:29:12.618 [2024-06-10 10:54:36.730611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.618 [2024-06-10 10:54:36.730619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.618 qpair failed and we were unable to recover it. 
00:29:12.618 [2024-06-10 10:54:36.730958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.618 [2024-06-10 10:54:36.730967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.618 qpair failed and we were unable to recover it. 00:29:12.618 [2024-06-10 10:54:36.731348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.618 [2024-06-10 10:54:36.731356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.618 qpair failed and we were unable to recover it. 00:29:12.618 [2024-06-10 10:54:36.731587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.618 [2024-06-10 10:54:36.731595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.618 qpair failed and we were unable to recover it. 00:29:12.619 [2024-06-10 10:54:36.731962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.619 [2024-06-10 10:54:36.731969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.619 qpair failed and we were unable to recover it. 00:29:12.619 [2024-06-10 10:54:36.732257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.619 [2024-06-10 10:54:36.732265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.619 qpair failed and we were unable to recover it. 00:29:12.619 [2024-06-10 10:54:36.732495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.619 [2024-06-10 10:54:36.732503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.619 qpair failed and we were unable to recover it. 00:29:12.619 [2024-06-10 10:54:36.732863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.619 [2024-06-10 10:54:36.732871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.619 qpair failed and we were unable to recover it. 00:29:12.619 [2024-06-10 10:54:36.733227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.619 [2024-06-10 10:54:36.733236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.619 qpair failed and we were unable to recover it. 00:29:12.619 [2024-06-10 10:54:36.733495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.619 [2024-06-10 10:54:36.733504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.619 qpair failed and we were unable to recover it. 00:29:12.619 [2024-06-10 10:54:36.733884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.619 [2024-06-10 10:54:36.733892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.619 qpair failed and we were unable to recover it. 
00:29:12.619 [2024-06-10 10:54:36.734275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.619 [2024-06-10 10:54:36.734283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.619 qpair failed and we were unable to recover it. 00:29:12.619 [2024-06-10 10:54:36.734641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.619 [2024-06-10 10:54:36.734649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.619 qpair failed and we were unable to recover it. 00:29:12.619 [2024-06-10 10:54:36.735014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.619 [2024-06-10 10:54:36.735022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.619 qpair failed and we were unable to recover it. 00:29:12.619 [2024-06-10 10:54:36.735216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.619 [2024-06-10 10:54:36.735223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.619 qpair failed and we were unable to recover it. 00:29:12.619 [2024-06-10 10:54:36.735568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.619 [2024-06-10 10:54:36.735577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.619 qpair failed and we were unable to recover it. 00:29:12.619 [2024-06-10 10:54:36.735761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.619 [2024-06-10 10:54:36.735769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.619 qpair failed and we were unable to recover it. 00:29:12.619 [2024-06-10 10:54:36.736107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.619 [2024-06-10 10:54:36.736115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.619 qpair failed and we were unable to recover it. 00:29:12.619 [2024-06-10 10:54:36.736495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.619 [2024-06-10 10:54:36.736503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.619 qpair failed and we were unable to recover it. 00:29:12.619 [2024-06-10 10:54:36.736825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.619 [2024-06-10 10:54:36.736834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.619 qpair failed and we were unable to recover it. 00:29:12.619 [2024-06-10 10:54:36.737193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.619 [2024-06-10 10:54:36.737201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.619 qpair failed and we were unable to recover it. 
00:29:12.619 [2024-06-10 10:54:36.737548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.619 [2024-06-10 10:54:36.737556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.619 qpair failed and we were unable to recover it. 00:29:12.619 [2024-06-10 10:54:36.737909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.619 [2024-06-10 10:54:36.737918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.619 qpair failed and we were unable to recover it. 00:29:12.619 [2024-06-10 10:54:36.738274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.619 [2024-06-10 10:54:36.738284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.619 qpair failed and we were unable to recover it. 00:29:12.619 [2024-06-10 10:54:36.738641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.619 [2024-06-10 10:54:36.738648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.619 qpair failed and we were unable to recover it. 00:29:12.619 [2024-06-10 10:54:36.738867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.619 [2024-06-10 10:54:36.738874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.619 qpair failed and we were unable to recover it. 00:29:12.619 [2024-06-10 10:54:36.739194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.619 [2024-06-10 10:54:36.739201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.619 qpair failed and we were unable to recover it. 00:29:12.619 [2024-06-10 10:54:36.739394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.619 [2024-06-10 10:54:36.739402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.619 qpair failed and we were unable to recover it. 00:29:12.619 [2024-06-10 10:54:36.739765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.619 [2024-06-10 10:54:36.739772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.619 qpair failed and we were unable to recover it. 00:29:12.619 [2024-06-10 10:54:36.740129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.619 [2024-06-10 10:54:36.740137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.619 qpair failed and we were unable to recover it. 00:29:12.619 [2024-06-10 10:54:36.740557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.619 [2024-06-10 10:54:36.740565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.619 qpair failed and we were unable to recover it. 
00:29:12.620 [2024-06-10 10:54:36.740919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.620 [2024-06-10 10:54:36.740927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.620 qpair failed and we were unable to recover it. 00:29:12.620 [2024-06-10 10:54:36.741280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.620 [2024-06-10 10:54:36.741289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.620 qpair failed and we were unable to recover it. 00:29:12.620 [2024-06-10 10:54:36.741650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.620 [2024-06-10 10:54:36.741657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.620 qpair failed and we were unable to recover it. 00:29:12.620 [2024-06-10 10:54:36.741918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.620 [2024-06-10 10:54:36.741925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.620 qpair failed and we were unable to recover it. 00:29:12.620 [2024-06-10 10:54:36.742128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.620 [2024-06-10 10:54:36.742136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.620 qpair failed and we were unable to recover it. 00:29:12.620 [2024-06-10 10:54:36.742367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.620 [2024-06-10 10:54:36.742375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.620 qpair failed and we were unable to recover it. 00:29:12.620 [2024-06-10 10:54:36.742746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.620 [2024-06-10 10:54:36.742755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.620 qpair failed and we were unable to recover it. 00:29:12.620 [2024-06-10 10:54:36.742975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.620 [2024-06-10 10:54:36.742983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.620 qpair failed and we were unable to recover it. 00:29:12.620 [2024-06-10 10:54:36.743337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.620 [2024-06-10 10:54:36.743345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.620 qpair failed and we were unable to recover it. 00:29:12.620 [2024-06-10 10:54:36.743528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.620 [2024-06-10 10:54:36.743535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.620 qpair failed and we were unable to recover it. 
00:29:12.620 [2024-06-10 10:54:36.743862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.620 [2024-06-10 10:54:36.743870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.620 qpair failed and we were unable to recover it. 00:29:12.620 [2024-06-10 10:54:36.744068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.620 [2024-06-10 10:54:36.744076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.620 qpair failed and we were unable to recover it. 00:29:12.620 [2024-06-10 10:54:36.744455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.620 [2024-06-10 10:54:36.744463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.620 qpair failed and we were unable to recover it. 00:29:12.620 [2024-06-10 10:54:36.744724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.620 [2024-06-10 10:54:36.744732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.620 qpair failed and we were unable to recover it. 00:29:12.620 [2024-06-10 10:54:36.745149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.620 [2024-06-10 10:54:36.745156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.620 qpair failed and we were unable to recover it. 00:29:12.620 [2024-06-10 10:54:36.745415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.620 [2024-06-10 10:54:36.745424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.620 qpair failed and we were unable to recover it. 00:29:12.620 [2024-06-10 10:54:36.745555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.620 [2024-06-10 10:54:36.745572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.620 qpair failed and we were unable to recover it. 00:29:12.620 [2024-06-10 10:54:36.745938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.620 [2024-06-10 10:54:36.745945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.620 qpair failed and we were unable to recover it. 00:29:12.620 [2024-06-10 10:54:36.746303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.620 [2024-06-10 10:54:36.746311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.620 qpair failed and we were unable to recover it. 00:29:12.620 [2024-06-10 10:54:36.746704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.620 [2024-06-10 10:54:36.746711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.620 qpair failed and we were unable to recover it. 
00:29:12.620 [2024-06-10 10:54:36.746905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.620 [2024-06-10 10:54:36.746912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.620 qpair failed and we were unable to recover it. 00:29:12.620 [2024-06-10 10:54:36.747316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.620 [2024-06-10 10:54:36.747324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.620 qpair failed and we were unable to recover it. 00:29:12.620 [2024-06-10 10:54:36.747683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.620 [2024-06-10 10:54:36.747691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.620 qpair failed and we were unable to recover it. 00:29:12.620 [2024-06-10 10:54:36.748073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.620 [2024-06-10 10:54:36.748081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.620 qpair failed and we were unable to recover it. 00:29:12.620 [2024-06-10 10:54:36.748439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.620 [2024-06-10 10:54:36.748447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.620 qpair failed and we were unable to recover it. 00:29:12.620 [2024-06-10 10:54:36.748807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.620 [2024-06-10 10:54:36.748815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.620 qpair failed and we were unable to recover it. 00:29:12.620 [2024-06-10 10:54:36.749156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.620 [2024-06-10 10:54:36.749164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.620 qpair failed and we were unable to recover it. 00:29:12.620 [2024-06-10 10:54:36.749526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.620 [2024-06-10 10:54:36.749534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.620 qpair failed and we were unable to recover it. 00:29:12.620 [2024-06-10 10:54:36.749753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.620 [2024-06-10 10:54:36.749761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.620 qpair failed and we were unable to recover it. 00:29:12.620 [2024-06-10 10:54:36.750029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.620 [2024-06-10 10:54:36.750037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.620 qpair failed and we were unable to recover it. 
00:29:12.620 [2024-06-10 10:54:36.750390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.620 [2024-06-10 10:54:36.750398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.620 qpair failed and we were unable to recover it. 00:29:12.620 [2024-06-10 10:54:36.750766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.620 [2024-06-10 10:54:36.750775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.620 qpair failed and we were unable to recover it. 00:29:12.620 [2024-06-10 10:54:36.751133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.620 [2024-06-10 10:54:36.751143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.620 qpair failed and we were unable to recover it. 00:29:12.620 [2024-06-10 10:54:36.751449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.620 [2024-06-10 10:54:36.751457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.620 qpair failed and we were unable to recover it. 00:29:12.620 [2024-06-10 10:54:36.751817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.620 [2024-06-10 10:54:36.751826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.620 qpair failed and we were unable to recover it. 00:29:12.620 [2024-06-10 10:54:36.752049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.620 [2024-06-10 10:54:36.752057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.620 qpair failed and we were unable to recover it. 00:29:12.620 [2024-06-10 10:54:36.752411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.621 [2024-06-10 10:54:36.752418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.621 qpair failed and we were unable to recover it. 00:29:12.621 [2024-06-10 10:54:36.752766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.621 [2024-06-10 10:54:36.752773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.621 qpair failed and we were unable to recover it. 00:29:12.621 [2024-06-10 10:54:36.753135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.621 [2024-06-10 10:54:36.753142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.621 qpair failed and we were unable to recover it. 00:29:12.621 [2024-06-10 10:54:36.753489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.621 [2024-06-10 10:54:36.753498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.621 qpair failed and we were unable to recover it. 
00:29:12.621 [2024-06-10 10:54:36.753856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.621 [2024-06-10 10:54:36.753863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.621 qpair failed and we were unable to recover it. 00:29:12.621 [2024-06-10 10:54:36.754268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.621 [2024-06-10 10:54:36.754276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.621 qpair failed and we were unable to recover it. 00:29:12.621 [2024-06-10 10:54:36.754505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.621 [2024-06-10 10:54:36.754512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.621 qpair failed and we were unable to recover it. 00:29:12.621 [2024-06-10 10:54:36.754901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.621 [2024-06-10 10:54:36.754908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.621 qpair failed and we were unable to recover it. 00:29:12.621 [2024-06-10 10:54:36.755265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.621 [2024-06-10 10:54:36.755273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.621 qpair failed and we were unable to recover it. 00:29:12.621 [2024-06-10 10:54:36.755524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.621 [2024-06-10 10:54:36.755531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.621 qpair failed and we were unable to recover it. 00:29:12.621 [2024-06-10 10:54:36.755890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.621 [2024-06-10 10:54:36.755898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.621 qpair failed and we were unable to recover it. 00:29:12.621 [2024-06-10 10:54:36.756213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.621 [2024-06-10 10:54:36.756222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.621 qpair failed and we were unable to recover it. 00:29:12.621 [2024-06-10 10:54:36.756599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.621 [2024-06-10 10:54:36.756607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.621 qpair failed and we were unable to recover it. 00:29:12.621 [2024-06-10 10:54:36.756965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.621 [2024-06-10 10:54:36.756974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.621 qpair failed and we were unable to recover it. 
00:29:12.621 [2024-06-10 10:54:36.757335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.621 [2024-06-10 10:54:36.757343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.621 qpair failed and we were unable to recover it. 00:29:12.621 [2024-06-10 10:54:36.757703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.621 [2024-06-10 10:54:36.757712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.621 qpair failed and we were unable to recover it. 00:29:12.621 [2024-06-10 10:54:36.758070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.621 [2024-06-10 10:54:36.758078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.621 qpair failed and we were unable to recover it. 00:29:12.621 [2024-06-10 10:54:36.758442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.621 [2024-06-10 10:54:36.758450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.621 qpair failed and we were unable to recover it. 00:29:12.621 [2024-06-10 10:54:36.758814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.621 [2024-06-10 10:54:36.758823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.621 qpair failed and we were unable to recover it. 00:29:12.621 [2024-06-10 10:54:36.759015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.621 [2024-06-10 10:54:36.759023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.621 qpair failed and we were unable to recover it. 00:29:12.621 [2024-06-10 10:54:36.759248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.621 [2024-06-10 10:54:36.759256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.621 qpair failed and we were unable to recover it. 00:29:12.621 [2024-06-10 10:54:36.759619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.621 [2024-06-10 10:54:36.759628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.621 qpair failed and we were unable to recover it. 00:29:12.621 [2024-06-10 10:54:36.759983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.621 [2024-06-10 10:54:36.759991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.621 qpair failed and we were unable to recover it. 00:29:12.621 [2024-06-10 10:54:36.760373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.621 [2024-06-10 10:54:36.760382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.621 qpair failed and we were unable to recover it. 
00:29:12.621 [2024-06-10 10:54:36.760726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.621 [2024-06-10 10:54:36.760734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.621 qpair failed and we were unable to recover it. 00:29:12.621 [2024-06-10 10:54:36.761095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.621 [2024-06-10 10:54:36.761103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.621 qpair failed and we were unable to recover it. 00:29:12.621 [2024-06-10 10:54:36.761569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.621 [2024-06-10 10:54:36.761577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.621 qpair failed and we were unable to recover it. 00:29:12.621 [2024-06-10 10:54:36.761927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.621 [2024-06-10 10:54:36.761935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.621 qpair failed and we were unable to recover it. 00:29:12.621 [2024-06-10 10:54:36.762407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.621 [2024-06-10 10:54:36.762436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.621 qpair failed and we were unable to recover it. 00:29:12.621 [2024-06-10 10:54:36.762845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.621 [2024-06-10 10:54:36.762855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.621 qpair failed and we were unable to recover it. 00:29:12.621 [2024-06-10 10:54:36.763218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.621 [2024-06-10 10:54:36.763227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.621 qpair failed and we were unable to recover it. 00:29:12.621 [2024-06-10 10:54:36.763667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.621 [2024-06-10 10:54:36.763676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.621 qpair failed and we were unable to recover it. 00:29:12.621 [2024-06-10 10:54:36.763887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.621 [2024-06-10 10:54:36.763895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.621 qpair failed and we were unable to recover it. 00:29:12.621 [2024-06-10 10:54:36.764255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.621 [2024-06-10 10:54:36.764263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.621 qpair failed and we were unable to recover it. 
00:29:12.621 [2024-06-10 10:54:36.764641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.621 [2024-06-10 10:54:36.764649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.621 qpair failed and we were unable to recover it. 00:29:12.621 [2024-06-10 10:54:36.765033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.621 [2024-06-10 10:54:36.765043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.621 qpair failed and we were unable to recover it. 00:29:12.621 [2024-06-10 10:54:36.765407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.622 [2024-06-10 10:54:36.765415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.622 qpair failed and we were unable to recover it. 00:29:12.622 [2024-06-10 10:54:36.765766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.622 [2024-06-10 10:54:36.765774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.622 qpair failed and we were unable to recover it. 00:29:12.622 [2024-06-10 10:54:36.766001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.622 [2024-06-10 10:54:36.766008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.622 qpair failed and we were unable to recover it. 00:29:12.622 [2024-06-10 10:54:36.766231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.622 [2024-06-10 10:54:36.766239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.622 qpair failed and we were unable to recover it. 00:29:12.622 [2024-06-10 10:54:36.766315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.622 [2024-06-10 10:54:36.766322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.622 qpair failed and we were unable to recover it. 00:29:12.622 [2024-06-10 10:54:36.766569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.622 [2024-06-10 10:54:36.766577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.622 qpair failed and we were unable to recover it. 00:29:12.622 [2024-06-10 10:54:36.766796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.622 [2024-06-10 10:54:36.766804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.622 qpair failed and we were unable to recover it. 00:29:12.622 [2024-06-10 10:54:36.767160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.622 [2024-06-10 10:54:36.767168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.622 qpair failed and we were unable to recover it. 
00:29:12.622 [2024-06-10 10:54:36.767509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.622 [2024-06-10 10:54:36.767518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.622 qpair failed and we were unable to recover it. 00:29:12.622 [2024-06-10 10:54:36.767868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.622 [2024-06-10 10:54:36.767876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.622 qpair failed and we were unable to recover it. 00:29:12.622 [2024-06-10 10:54:36.768238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.622 [2024-06-10 10:54:36.768249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.622 qpair failed and we were unable to recover it. 00:29:12.622 [2024-06-10 10:54:36.768462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.622 [2024-06-10 10:54:36.768471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.622 qpair failed and we were unable to recover it. 00:29:12.622 [2024-06-10 10:54:36.768810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.622 [2024-06-10 10:54:36.768818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.622 qpair failed and we were unable to recover it. 00:29:12.622 [2024-06-10 10:54:36.769180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.622 [2024-06-10 10:54:36.769188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.622 qpair failed and we were unable to recover it. 00:29:12.622 [2024-06-10 10:54:36.769381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.622 [2024-06-10 10:54:36.769389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.622 qpair failed and we were unable to recover it. 00:29:12.622 [2024-06-10 10:54:36.769441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.622 [2024-06-10 10:54:36.769447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.622 qpair failed and we were unable to recover it. 00:29:12.622 [2024-06-10 10:54:36.769821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.622 [2024-06-10 10:54:36.769829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.622 qpair failed and we were unable to recover it. 00:29:12.622 [2024-06-10 10:54:36.770208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.622 [2024-06-10 10:54:36.770215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.622 qpair failed and we were unable to recover it. 
00:29:12.622 [2024-06-10 10:54:36.770586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.622 [2024-06-10 10:54:36.770594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.622 qpair failed and we were unable to recover it. 00:29:12.622 [2024-06-10 10:54:36.770904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.622 [2024-06-10 10:54:36.770911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.622 qpair failed and we were unable to recover it. 00:29:12.622 [2024-06-10 10:54:36.771140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.622 [2024-06-10 10:54:36.771148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.622 qpair failed and we were unable to recover it. 00:29:12.622 [2024-06-10 10:54:36.771502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.622 [2024-06-10 10:54:36.771511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.622 qpair failed and we were unable to recover it. 00:29:12.622 [2024-06-10 10:54:36.771877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.622 [2024-06-10 10:54:36.771885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.622 qpair failed and we were unable to recover it. 00:29:12.622 [2024-06-10 10:54:36.772267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.622 [2024-06-10 10:54:36.772276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.622 qpair failed and we were unable to recover it. 00:29:12.622 [2024-06-10 10:54:36.772636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.622 [2024-06-10 10:54:36.772645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.622 qpair failed and we were unable to recover it. 00:29:12.622 [2024-06-10 10:54:36.772907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.622 [2024-06-10 10:54:36.772915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.622 qpair failed and we were unable to recover it. 00:29:12.622 [2024-06-10 10:54:36.773277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.622 [2024-06-10 10:54:36.773285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.622 qpair failed and we were unable to recover it. 00:29:12.622 [2024-06-10 10:54:36.773609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.622 [2024-06-10 10:54:36.773619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.622 qpair failed and we were unable to recover it. 
00:29:12.622 [2024-06-10 10:54:36.773814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.622 [2024-06-10 10:54:36.773823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.622 qpair failed and we were unable to recover it. 00:29:12.622 [2024-06-10 10:54:36.774171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.622 [2024-06-10 10:54:36.774178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.622 qpair failed and we were unable to recover it. 00:29:12.622 [2024-06-10 10:54:36.774234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.622 [2024-06-10 10:54:36.774240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.622 qpair failed and we were unable to recover it. 00:29:12.622 [2024-06-10 10:54:36.774614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.622 [2024-06-10 10:54:36.774622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.622 qpair failed and we were unable to recover it. 00:29:12.622 [2024-06-10 10:54:36.774982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.622 [2024-06-10 10:54:36.774990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.622 qpair failed and we were unable to recover it. 00:29:12.622 [2024-06-10 10:54:36.775352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.622 [2024-06-10 10:54:36.775361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.622 qpair failed and we were unable to recover it. 00:29:12.622 [2024-06-10 10:54:36.775745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.622 [2024-06-10 10:54:36.775753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.622 qpair failed and we were unable to recover it. 00:29:12.622 [2024-06-10 10:54:36.775815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.622 [2024-06-10 10:54:36.775823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.622 qpair failed and we were unable to recover it. 00:29:12.622 [2024-06-10 10:54:36.776147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.622 [2024-06-10 10:54:36.776155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.622 qpair failed and we were unable to recover it. 00:29:12.622 [2024-06-10 10:54:36.776574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.622 [2024-06-10 10:54:36.776582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.622 qpair failed and we were unable to recover it. 
00:29:12.623 [2024-06-10 10:54:36.776957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.623 [2024-06-10 10:54:36.776965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.623 qpair failed and we were unable to recover it. 00:29:12.623 [2024-06-10 10:54:36.777311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.623 [2024-06-10 10:54:36.777319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.623 qpair failed and we were unable to recover it. 00:29:12.623 [2024-06-10 10:54:36.777589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.623 [2024-06-10 10:54:36.777597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.623 qpair failed and we were unable to recover it. 00:29:12.623 [2024-06-10 10:54:36.777979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.623 [2024-06-10 10:54:36.777987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.623 qpair failed and we were unable to recover it. 00:29:12.623 [2024-06-10 10:54:36.778369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.623 [2024-06-10 10:54:36.778377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.623 qpair failed and we were unable to recover it. 00:29:12.623 [2024-06-10 10:54:36.778587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.623 [2024-06-10 10:54:36.778594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.623 qpair failed and we were unable to recover it. 00:29:12.623 [2024-06-10 10:54:36.778815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.623 [2024-06-10 10:54:36.778824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.623 qpair failed and we were unable to recover it. 00:29:12.623 [2024-06-10 10:54:36.779228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.623 [2024-06-10 10:54:36.779235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.623 qpair failed and we were unable to recover it. 00:29:12.623 [2024-06-10 10:54:36.779596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.623 [2024-06-10 10:54:36.779604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.623 qpair failed and we were unable to recover it. 00:29:12.623 [2024-06-10 10:54:36.779657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.623 [2024-06-10 10:54:36.779663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.623 qpair failed and we were unable to recover it. 
00:29:12.623 [2024-06-10 10:54:36.779987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.623 [2024-06-10 10:54:36.779995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.623 qpair failed and we were unable to recover it. 00:29:12.623 [2024-06-10 10:54:36.780351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.623 [2024-06-10 10:54:36.780359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.623 qpair failed and we were unable to recover it. 00:29:12.623 [2024-06-10 10:54:36.780717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.623 [2024-06-10 10:54:36.780725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.623 qpair failed and we were unable to recover it. 00:29:12.623 [2024-06-10 10:54:36.780988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.623 [2024-06-10 10:54:36.780996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.623 qpair failed and we were unable to recover it. 00:29:12.623 [2024-06-10 10:54:36.781384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.623 [2024-06-10 10:54:36.781392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.623 qpair failed and we were unable to recover it. 00:29:12.623 [2024-06-10 10:54:36.781760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.623 [2024-06-10 10:54:36.781768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.623 qpair failed and we were unable to recover it. 00:29:12.623 [2024-06-10 10:54:36.781830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.623 [2024-06-10 10:54:36.781836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.623 qpair failed and we were unable to recover it. 00:29:12.623 [2024-06-10 10:54:36.782173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.623 [2024-06-10 10:54:36.782181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.623 qpair failed and we were unable to recover it. 00:29:12.623 [2024-06-10 10:54:36.782574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.623 [2024-06-10 10:54:36.782582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.623 qpair failed and we were unable to recover it. 00:29:12.623 [2024-06-10 10:54:36.782797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.623 [2024-06-10 10:54:36.782805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.623 qpair failed and we were unable to recover it. 
00:29:12.623 [2024-06-10 10:54:36.783165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.623 [2024-06-10 10:54:36.783172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.623 qpair failed and we were unable to recover it. 00:29:12.623 [2024-06-10 10:54:36.783534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.623 [2024-06-10 10:54:36.783542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.623 qpair failed and we were unable to recover it. 00:29:12.623 [2024-06-10 10:54:36.783904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.623 [2024-06-10 10:54:36.783911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.623 qpair failed and we were unable to recover it. 00:29:12.623 [2024-06-10 10:54:36.784296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.623 [2024-06-10 10:54:36.784305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.623 qpair failed and we were unable to recover it. 00:29:12.623 [2024-06-10 10:54:36.784678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.623 [2024-06-10 10:54:36.784686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.623 qpair failed and we were unable to recover it. 00:29:12.623 [2024-06-10 10:54:36.785048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.623 [2024-06-10 10:54:36.785056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.623 qpair failed and we were unable to recover it. 00:29:12.623 [2024-06-10 10:54:36.785430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.623 [2024-06-10 10:54:36.785439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.623 qpair failed and we were unable to recover it. 00:29:12.623 [2024-06-10 10:54:36.785662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.623 [2024-06-10 10:54:36.785670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.623 qpair failed and we were unable to recover it. 00:29:12.623 [2024-06-10 10:54:36.785946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.623 [2024-06-10 10:54:36.785954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.623 qpair failed and we were unable to recover it. 00:29:12.623 [2024-06-10 10:54:36.786306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.623 [2024-06-10 10:54:36.786317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.623 qpair failed and we were unable to recover it. 
00:29:12.623 [2024-06-10 10:54:36.786702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.623 [2024-06-10 10:54:36.786711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.623 qpair failed and we were unable to recover it. 00:29:12.623 [2024-06-10 10:54:36.786931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.623 [2024-06-10 10:54:36.786939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.623 qpair failed and we were unable to recover it. 00:29:12.623 [2024-06-10 10:54:36.787169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.623 [2024-06-10 10:54:36.787177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.623 qpair failed and we were unable to recover it. 00:29:12.624 [2024-06-10 10:54:36.787389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.624 [2024-06-10 10:54:36.787397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.624 qpair failed and we were unable to recover it. 00:29:12.624 [2024-06-10 10:54:36.787575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.624 [2024-06-10 10:54:36.787582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.624 qpair failed and we were unable to recover it. 00:29:12.624 [2024-06-10 10:54:36.787958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.624 [2024-06-10 10:54:36.787967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.624 qpair failed and we were unable to recover it. 00:29:12.624 [2024-06-10 10:54:36.788329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.624 [2024-06-10 10:54:36.788337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.624 qpair failed and we were unable to recover it. 00:29:12.624 [2024-06-10 10:54:36.788706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.624 [2024-06-10 10:54:36.788714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.624 qpair failed and we were unable to recover it. 00:29:12.624 [2024-06-10 10:54:36.789077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.624 [2024-06-10 10:54:36.789085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.624 qpair failed and we were unable to recover it. 00:29:12.624 [2024-06-10 10:54:36.789465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.624 [2024-06-10 10:54:36.789474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.624 qpair failed and we were unable to recover it. 
00:29:12.624 [2024-06-10 10:54:36.789701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.624 [2024-06-10 10:54:36.789709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.624 qpair failed and we were unable to recover it. 00:29:12.624 [2024-06-10 10:54:36.789992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.624 [2024-06-10 10:54:36.790000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.624 qpair failed and we were unable to recover it. 00:29:12.624 [2024-06-10 10:54:36.790358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.624 [2024-06-10 10:54:36.790366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.624 qpair failed and we were unable to recover it. 00:29:12.624 [2024-06-10 10:54:36.790736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.624 [2024-06-10 10:54:36.790744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.624 qpair failed and we were unable to recover it. 00:29:12.624 [2024-06-10 10:54:36.791102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.624 [2024-06-10 10:54:36.791110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.624 qpair failed and we were unable to recover it. 00:29:12.624 [2024-06-10 10:54:36.791404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.624 [2024-06-10 10:54:36.791411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.624 qpair failed and we were unable to recover it. 00:29:12.624 [2024-06-10 10:54:36.791567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.624 [2024-06-10 10:54:36.791577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.624 qpair failed and we were unable to recover it. 00:29:12.624 [2024-06-10 10:54:36.791941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.624 [2024-06-10 10:54:36.791949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.624 qpair failed and we were unable to recover it. 00:29:12.624 [2024-06-10 10:54:36.792305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.624 [2024-06-10 10:54:36.792313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.624 qpair failed and we were unable to recover it. 00:29:12.624 [2024-06-10 10:54:36.792673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.624 [2024-06-10 10:54:36.792682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.624 qpair failed and we were unable to recover it. 
00:29:12.624 [2024-06-10 10:54:36.793045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.624 [2024-06-10 10:54:36.793053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.624 qpair failed and we were unable to recover it. 00:29:12.624 [2024-06-10 10:54:36.793258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.624 [2024-06-10 10:54:36.793266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.624 qpair failed and we were unable to recover it. 00:29:12.624 [2024-06-10 10:54:36.793524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.624 [2024-06-10 10:54:36.793532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.624 qpair failed and we were unable to recover it. 00:29:12.624 [2024-06-10 10:54:36.793889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.624 [2024-06-10 10:54:36.793898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.624 qpair failed and we were unable to recover it. 00:29:12.624 [2024-06-10 10:54:36.794163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.624 [2024-06-10 10:54:36.794171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.624 qpair failed and we were unable to recover it. 00:29:12.624 [2024-06-10 10:54:36.794428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.624 [2024-06-10 10:54:36.794436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.624 qpair failed and we were unable to recover it. 00:29:12.624 [2024-06-10 10:54:36.794810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.624 [2024-06-10 10:54:36.794817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.624 qpair failed and we were unable to recover it. 00:29:12.624 [2024-06-10 10:54:36.795188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.624 [2024-06-10 10:54:36.795196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.624 qpair failed and we were unable to recover it. 00:29:12.624 [2024-06-10 10:54:36.795569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.624 [2024-06-10 10:54:36.795577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.624 qpair failed and we were unable to recover it. 00:29:12.624 [2024-06-10 10:54:36.795772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.624 [2024-06-10 10:54:36.795780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.624 qpair failed and we were unable to recover it. 
00:29:12.624 [2024-06-10 10:54:36.796114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.624 [2024-06-10 10:54:36.796123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.624 qpair failed and we were unable to recover it. 00:29:12.624 [2024-06-10 10:54:36.796505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.625 [2024-06-10 10:54:36.796514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.625 qpair failed and we were unable to recover it. 00:29:12.625 [2024-06-10 10:54:36.796735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.625 [2024-06-10 10:54:36.796742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.625 qpair failed and we were unable to recover it. 00:29:12.625 [2024-06-10 10:54:36.796962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.625 [2024-06-10 10:54:36.796970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.625 qpair failed and we were unable to recover it. 00:29:12.625 [2024-06-10 10:54:36.797334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.625 [2024-06-10 10:54:36.797345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.625 qpair failed and we were unable to recover it. 00:29:12.625 [2024-06-10 10:54:36.797713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.625 [2024-06-10 10:54:36.797721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.625 qpair failed and we were unable to recover it. 00:29:12.625 [2024-06-10 10:54:36.797949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.625 [2024-06-10 10:54:36.797957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.625 qpair failed and we were unable to recover it. 00:29:12.625 [2024-06-10 10:54:36.798151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.625 [2024-06-10 10:54:36.798160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.625 qpair failed and we were unable to recover it. 00:29:12.625 [2024-06-10 10:54:36.798384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.625 [2024-06-10 10:54:36.798392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.625 qpair failed and we were unable to recover it. 00:29:12.625 [2024-06-10 10:54:36.798763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.625 [2024-06-10 10:54:36.798772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.625 qpair failed and we were unable to recover it. 
00:29:12.625 [2024-06-10 10:54:36.799131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.625 [2024-06-10 10:54:36.799140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.625 qpair failed and we were unable to recover it. 00:29:12.625 [2024-06-10 10:54:36.799488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.625 [2024-06-10 10:54:36.799496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.625 qpair failed and we were unable to recover it. 00:29:12.625 [2024-06-10 10:54:36.799689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.625 [2024-06-10 10:54:36.799697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.625 qpair failed and we were unable to recover it. 00:29:12.625 [2024-06-10 10:54:36.800035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.625 [2024-06-10 10:54:36.800044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.625 qpair failed and we were unable to recover it. 00:29:12.625 [2024-06-10 10:54:36.800407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.625 [2024-06-10 10:54:36.800415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.625 qpair failed and we were unable to recover it. 00:29:12.625 [2024-06-10 10:54:36.800637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.625 [2024-06-10 10:54:36.800645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.625 qpair failed and we were unable to recover it. 00:29:12.625 [2024-06-10 10:54:36.801022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.625 [2024-06-10 10:54:36.801030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.625 qpair failed and we were unable to recover it. 00:29:12.625 [2024-06-10 10:54:36.801253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.625 [2024-06-10 10:54:36.801261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.625 qpair failed and we were unable to recover it. 00:29:12.625 [2024-06-10 10:54:36.801468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.625 [2024-06-10 10:54:36.801476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.625 qpair failed and we were unable to recover it. 00:29:12.625 [2024-06-10 10:54:36.801828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.625 [2024-06-10 10:54:36.801836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.625 qpair failed and we were unable to recover it. 
00:29:12.625 [2024-06-10 10:54:36.802004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.625 [2024-06-10 10:54:36.802012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.625 qpair failed and we were unable to recover it. 00:29:12.625 [2024-06-10 10:54:36.802399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.625 [2024-06-10 10:54:36.802407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.625 qpair failed and we were unable to recover it. 00:29:12.625 [2024-06-10 10:54:36.802781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.625 [2024-06-10 10:54:36.802789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.625 qpair failed and we were unable to recover it. 00:29:12.625 [2024-06-10 10:54:36.803178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.625 [2024-06-10 10:54:36.803187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.625 qpair failed and we were unable to recover it. 00:29:12.625 [2024-06-10 10:54:36.803575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.625 [2024-06-10 10:54:36.803584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.625 qpair failed and we were unable to recover it. 00:29:12.625 [2024-06-10 10:54:36.803848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.625 [2024-06-10 10:54:36.803857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.625 qpair failed and we were unable to recover it. 00:29:12.625 [2024-06-10 10:54:36.804151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.625 [2024-06-10 10:54:36.804160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.625 qpair failed and we were unable to recover it. 00:29:12.625 [2024-06-10 10:54:36.804340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.625 [2024-06-10 10:54:36.804349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.625 qpair failed and we were unable to recover it. 00:29:12.625 [2024-06-10 10:54:36.804691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.625 [2024-06-10 10:54:36.804699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.625 qpair failed and we were unable to recover it. 00:29:12.625 [2024-06-10 10:54:36.805059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.625 [2024-06-10 10:54:36.805067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.625 qpair failed and we were unable to recover it. 
00:29:12.625 [2024-06-10 10:54:36.805428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.625 [2024-06-10 10:54:36.805437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.625 qpair failed and we were unable to recover it. 00:29:12.625 [2024-06-10 10:54:36.805817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.625 [2024-06-10 10:54:36.805825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.625 qpair failed and we were unable to recover it. 00:29:12.625 [2024-06-10 10:54:36.806059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.625 [2024-06-10 10:54:36.806066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.625 qpair failed and we were unable to recover it. 00:29:12.625 [2024-06-10 10:54:36.806336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.625 [2024-06-10 10:54:36.806344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.625 qpair failed and we were unable to recover it. 00:29:12.625 [2024-06-10 10:54:36.806741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.625 [2024-06-10 10:54:36.806749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.625 qpair failed and we were unable to recover it. 00:29:12.625 [2024-06-10 10:54:36.807087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.625 [2024-06-10 10:54:36.807096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.625 qpair failed and we were unable to recover it. 00:29:12.625 [2024-06-10 10:54:36.807454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.625 [2024-06-10 10:54:36.807462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.625 qpair failed and we were unable to recover it. 00:29:12.625 [2024-06-10 10:54:36.807725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.625 [2024-06-10 10:54:36.807732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-06-10 10:54:36.808048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-06-10 10:54:36.808057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-06-10 10:54:36.808391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-06-10 10:54:36.808399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 
00:29:12.626 [2024-06-10 10:54:36.808786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-06-10 10:54:36.808795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-06-10 10:54:36.809140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-06-10 10:54:36.809149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-06-10 10:54:36.809342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-06-10 10:54:36.809352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-06-10 10:54:36.809694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-06-10 10:54:36.809702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-06-10 10:54:36.809933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-06-10 10:54:36.809941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-06-10 10:54:36.810301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-06-10 10:54:36.810309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-06-10 10:54:36.810673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-06-10 10:54:36.810681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-06-10 10:54:36.810902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-06-10 10:54:36.810909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-06-10 10:54:36.811132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-06-10 10:54:36.811139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-06-10 10:54:36.811523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-06-10 10:54:36.811532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 
00:29:12.626 [2024-06-10 10:54:36.811738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-06-10 10:54:36.811745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-06-10 10:54:36.811976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-06-10 10:54:36.811984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-06-10 10:54:36.812065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-06-10 10:54:36.812073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-06-10 10:54:36.812372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-06-10 10:54:36.812381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-06-10 10:54:36.812599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-06-10 10:54:36.812606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-06-10 10:54:36.812946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-06-10 10:54:36.812953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-06-10 10:54:36.813336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-06-10 10:54:36.813345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-06-10 10:54:36.813735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-06-10 10:54:36.813743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-06-10 10:54:36.814100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-06-10 10:54:36.814107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-06-10 10:54:36.814279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-06-10 10:54:36.814288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 
00:29:12.626 [2024-06-10 10:54:36.814494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-06-10 10:54:36.814504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-06-10 10:54:36.814757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-06-10 10:54:36.814765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-06-10 10:54:36.814986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-06-10 10:54:36.814993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-06-10 10:54:36.815341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-06-10 10:54:36.815349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-06-10 10:54:36.815532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-06-10 10:54:36.815539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-06-10 10:54:36.815876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-06-10 10:54:36.815884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-06-10 10:54:36.816265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-06-10 10:54:36.816274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-06-10 10:54:36.816644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-06-10 10:54:36.816653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-06-10 10:54:36.816820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-06-10 10:54:36.816828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-06-10 10:54:36.817195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-06-10 10:54:36.817203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 
00:29:12.626 [2024-06-10 10:54:36.817477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-06-10 10:54:36.817485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-06-10 10:54:36.817875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-06-10 10:54:36.817884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-06-10 10:54:36.818256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-06-10 10:54:36.818265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-06-10 10:54:36.818496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-06-10 10:54:36.818504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.626 qpair failed and we were unable to recover it. 00:29:12.626 [2024-06-10 10:54:36.818698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.626 [2024-06-10 10:54:36.818706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.627 qpair failed and we were unable to recover it. 00:29:12.627 [2024-06-10 10:54:36.819036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.627 [2024-06-10 10:54:36.819045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.627 qpair failed and we were unable to recover it. 00:29:12.627 [2024-06-10 10:54:36.819283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.627 [2024-06-10 10:54:36.819291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.627 qpair failed and we were unable to recover it. 00:29:12.627 [2024-06-10 10:54:36.819648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.627 [2024-06-10 10:54:36.819657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.627 qpair failed and we were unable to recover it. 00:29:12.627 [2024-06-10 10:54:36.819995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.627 [2024-06-10 10:54:36.820003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.627 qpair failed and we were unable to recover it. 00:29:12.627 [2024-06-10 10:54:36.820224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.627 [2024-06-10 10:54:36.820233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.627 qpair failed and we were unable to recover it. 
00:29:12.627 [2024-06-10 10:54:36.820613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.627 [2024-06-10 10:54:36.820621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.627 qpair failed and we were unable to recover it. 00:29:12.627 [2024-06-10 10:54:36.820986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.627 [2024-06-10 10:54:36.820995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.627 qpair failed and we were unable to recover it. 00:29:12.627 [2024-06-10 10:54:36.821216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.627 [2024-06-10 10:54:36.821224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.627 qpair failed and we were unable to recover it. 00:29:12.627 [2024-06-10 10:54:36.821468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.627 [2024-06-10 10:54:36.821476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.627 qpair failed and we were unable to recover it. 00:29:12.627 [2024-06-10 10:54:36.821793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.627 [2024-06-10 10:54:36.821801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.627 qpair failed and we were unable to recover it. 00:29:12.627 [2024-06-10 10:54:36.822179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.627 [2024-06-10 10:54:36.822187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.627 qpair failed and we were unable to recover it. 00:29:12.627 [2024-06-10 10:54:36.822617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.627 [2024-06-10 10:54:36.822625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.627 qpair failed and we were unable to recover it. 00:29:12.627 [2024-06-10 10:54:36.822990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.627 [2024-06-10 10:54:36.822999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.627 qpair failed and we were unable to recover it. 00:29:12.627 [2024-06-10 10:54:36.823216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.627 [2024-06-10 10:54:36.823225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.627 qpair failed and we were unable to recover it. 00:29:12.627 [2024-06-10 10:54:36.823583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.627 [2024-06-10 10:54:36.823594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.627 qpair failed and we were unable to recover it. 
00:29:12.627 [2024-06-10 10:54:36.823821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.627 [2024-06-10 10:54:36.823829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.627 qpair failed and we were unable to recover it. 00:29:12.627 [2024-06-10 10:54:36.824088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.627 [2024-06-10 10:54:36.824096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.627 qpair failed and we were unable to recover it. 00:29:12.627 [2024-06-10 10:54:36.824331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.627 [2024-06-10 10:54:36.824340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.627 qpair failed and we were unable to recover it. 00:29:12.627 [2024-06-10 10:54:36.824569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.627 [2024-06-10 10:54:36.824576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.627 qpair failed and we were unable to recover it. 00:29:12.627 [2024-06-10 10:54:36.824944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.627 [2024-06-10 10:54:36.824951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.627 qpair failed and we were unable to recover it. 00:29:12.627 [2024-06-10 10:54:36.825334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.627 [2024-06-10 10:54:36.825342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.627 qpair failed and we were unable to recover it. 00:29:12.627 [2024-06-10 10:54:36.825580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.627 [2024-06-10 10:54:36.825587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.627 qpair failed and we were unable to recover it. 00:29:12.627 [2024-06-10 10:54:36.825974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.627 [2024-06-10 10:54:36.825982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.627 qpair failed and we were unable to recover it. 00:29:12.627 [2024-06-10 10:54:36.826203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.627 [2024-06-10 10:54:36.826211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.627 qpair failed and we were unable to recover it. 00:29:12.627 [2024-06-10 10:54:36.826586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.627 [2024-06-10 10:54:36.826594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.627 qpair failed and we were unable to recover it. 
00:29:12.627 [2024-06-10 10:54:36.826995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.627 [2024-06-10 10:54:36.827003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420
00:29:12.627 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats verbatim for every subsequent reconnect attempt from 10:54:36.827354 through 10:54:36.892341; only the timestamps differ. Jenkins console time advances from 00:29:12.627 to 00:29:12.908 over the run of repeats. ...]
00:29:12.908 [2024-06-10 10:54:36.892666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.908 [2024-06-10 10:54:36.892674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.908 qpair failed and we were unable to recover it. 00:29:12.908 [2024-06-10 10:54:36.892895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.908 [2024-06-10 10:54:36.892902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.908 qpair failed and we were unable to recover it. 00:29:12.908 [2024-06-10 10:54:36.893285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.908 [2024-06-10 10:54:36.893293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.908 qpair failed and we were unable to recover it. 00:29:12.908 [2024-06-10 10:54:36.893665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.908 [2024-06-10 10:54:36.893673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.908 qpair failed and we were unable to recover it. 00:29:12.908 [2024-06-10 10:54:36.893867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.908 [2024-06-10 10:54:36.893874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.908 qpair failed and we were unable to recover it. 00:29:12.908 [2024-06-10 10:54:36.894092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.908 [2024-06-10 10:54:36.894099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.908 qpair failed and we were unable to recover it. 00:29:12.908 [2024-06-10 10:54:36.894461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.908 [2024-06-10 10:54:36.894469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.908 qpair failed and we were unable to recover it. 00:29:12.908 [2024-06-10 10:54:36.894691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.909 [2024-06-10 10:54:36.894699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.909 qpair failed and we were unable to recover it. 00:29:12.909 [2024-06-10 10:54:36.894969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.909 [2024-06-10 10:54:36.894976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.909 qpair failed and we were unable to recover it. 00:29:12.909 [2024-06-10 10:54:36.895360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.909 [2024-06-10 10:54:36.895369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.909 qpair failed and we were unable to recover it. 
00:29:12.909 [2024-06-10 10:54:36.895725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.909 [2024-06-10 10:54:36.895734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.909 qpair failed and we were unable to recover it. 00:29:12.909 [2024-06-10 10:54:36.896132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.909 [2024-06-10 10:54:36.896140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.909 qpair failed and we were unable to recover it. 00:29:12.909 [2024-06-10 10:54:36.896509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.909 [2024-06-10 10:54:36.896517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.909 qpair failed and we were unable to recover it. 00:29:12.909 [2024-06-10 10:54:36.896874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.909 [2024-06-10 10:54:36.896882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.909 qpair failed and we were unable to recover it. 00:29:12.909 [2024-06-10 10:54:36.897235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.909 [2024-06-10 10:54:36.897259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.909 qpair failed and we were unable to recover it. 00:29:12.909 [2024-06-10 10:54:36.897621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.909 [2024-06-10 10:54:36.897629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.909 qpair failed and we were unable to recover it. 00:29:12.909 [2024-06-10 10:54:36.897962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.909 [2024-06-10 10:54:36.897971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.909 qpair failed and we were unable to recover it. 00:29:12.909 [2024-06-10 10:54:36.898330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.909 [2024-06-10 10:54:36.898339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.909 qpair failed and we were unable to recover it. 00:29:12.909 [2024-06-10 10:54:36.898700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.909 [2024-06-10 10:54:36.898707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.909 qpair failed and we were unable to recover it. 00:29:12.909 [2024-06-10 10:54:36.899064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.909 [2024-06-10 10:54:36.899074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.909 qpair failed and we were unable to recover it. 
00:29:12.909 [2024-06-10 10:54:36.899412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.909 [2024-06-10 10:54:36.899419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.909 qpair failed and we were unable to recover it. 00:29:12.909 [2024-06-10 10:54:36.899778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.909 [2024-06-10 10:54:36.899786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.909 qpair failed and we were unable to recover it. 00:29:12.909 [2024-06-10 10:54:36.900151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.909 [2024-06-10 10:54:36.900159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.909 qpair failed and we were unable to recover it. 00:29:12.909 [2024-06-10 10:54:36.900353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.909 [2024-06-10 10:54:36.900361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.909 qpair failed and we were unable to recover it. 00:29:12.909 [2024-06-10 10:54:36.900574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.909 [2024-06-10 10:54:36.900591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.909 qpair failed and we were unable to recover it. 00:29:12.909 [2024-06-10 10:54:36.900949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.909 [2024-06-10 10:54:36.900958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.909 qpair failed and we were unable to recover it. 00:29:12.909 [2024-06-10 10:54:36.901178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.909 [2024-06-10 10:54:36.901187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.909 qpair failed and we were unable to recover it. 00:29:12.909 [2024-06-10 10:54:36.901547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.909 [2024-06-10 10:54:36.901555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.909 qpair failed and we were unable to recover it. 00:29:12.909 [2024-06-10 10:54:36.901876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.909 [2024-06-10 10:54:36.901884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.909 qpair failed and we were unable to recover it. 00:29:12.909 [2024-06-10 10:54:36.902249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.909 [2024-06-10 10:54:36.902258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.909 qpair failed and we were unable to recover it. 
00:29:12.909 [2024-06-10 10:54:36.902619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.909 [2024-06-10 10:54:36.902627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.909 qpair failed and we were unable to recover it. 00:29:12.909 [2024-06-10 10:54:36.902985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.909 [2024-06-10 10:54:36.902993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.909 qpair failed and we were unable to recover it. 00:29:12.909 [2024-06-10 10:54:36.903337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.909 [2024-06-10 10:54:36.903346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.909 qpair failed and we were unable to recover it. 00:29:12.909 [2024-06-10 10:54:36.903712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.909 [2024-06-10 10:54:36.903720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.909 qpair failed and we were unable to recover it. 00:29:12.909 [2024-06-10 10:54:36.904081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.909 [2024-06-10 10:54:36.904089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.909 qpair failed and we were unable to recover it. 00:29:12.909 [2024-06-10 10:54:36.904403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.909 [2024-06-10 10:54:36.904412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.909 qpair failed and we were unable to recover it. 00:29:12.909 [2024-06-10 10:54:36.904757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.909 [2024-06-10 10:54:36.904765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.909 qpair failed and we were unable to recover it. 00:29:12.909 [2024-06-10 10:54:36.905115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.909 [2024-06-10 10:54:36.905122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.909 qpair failed and we were unable to recover it. 00:29:12.909 [2024-06-10 10:54:36.905503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.909 [2024-06-10 10:54:36.905512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.909 qpair failed and we were unable to recover it. 00:29:12.909 [2024-06-10 10:54:36.905950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.909 [2024-06-10 10:54:36.905958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.909 qpair failed and we were unable to recover it. 
00:29:12.909 [2024-06-10 10:54:36.906430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.909 [2024-06-10 10:54:36.906459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.909 qpair failed and we were unable to recover it. 00:29:12.909 [2024-06-10 10:54:36.906845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.909 [2024-06-10 10:54:36.906854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.909 qpair failed and we were unable to recover it. 00:29:12.909 [2024-06-10 10:54:36.907041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.909 [2024-06-10 10:54:36.907049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.909 qpair failed and we were unable to recover it. 00:29:12.909 [2024-06-10 10:54:36.907404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.909 [2024-06-10 10:54:36.907413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.909 qpair failed and we were unable to recover it. 00:29:12.909 [2024-06-10 10:54:36.907592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.909 [2024-06-10 10:54:36.907599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.909 qpair failed and we were unable to recover it. 00:29:12.910 [2024-06-10 10:54:36.907973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.910 [2024-06-10 10:54:36.907981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.910 qpair failed and we were unable to recover it. 00:29:12.910 [2024-06-10 10:54:36.908338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.910 [2024-06-10 10:54:36.908346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.910 qpair failed and we were unable to recover it. 00:29:12.910 [2024-06-10 10:54:36.908710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.910 [2024-06-10 10:54:36.908719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.910 qpair failed and we were unable to recover it. 00:29:12.910 [2024-06-10 10:54:36.909096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.910 [2024-06-10 10:54:36.909105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.910 qpair failed and we were unable to recover it. 00:29:12.910 [2024-06-10 10:54:36.909460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.910 [2024-06-10 10:54:36.909469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.910 qpair failed and we were unable to recover it. 
00:29:12.910 [2024-06-10 10:54:36.909690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.910 [2024-06-10 10:54:36.909698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.910 qpair failed and we were unable to recover it. 00:29:12.910 [2024-06-10 10:54:36.909916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.910 [2024-06-10 10:54:36.909925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.910 qpair failed and we were unable to recover it. 00:29:12.910 [2024-06-10 10:54:36.910304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.910 [2024-06-10 10:54:36.910312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.910 qpair failed and we were unable to recover it. 00:29:12.910 [2024-06-10 10:54:36.910376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.910 [2024-06-10 10:54:36.910382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.910 qpair failed and we were unable to recover it. 00:29:12.910 [2024-06-10 10:54:36.910722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.910 [2024-06-10 10:54:36.910730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.910 qpair failed and we were unable to recover it. 00:29:12.910 [2024-06-10 10:54:36.910956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.910 [2024-06-10 10:54:36.910964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.910 qpair failed and we were unable to recover it. 00:29:12.910 [2024-06-10 10:54:36.911322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.910 [2024-06-10 10:54:36.911330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.910 qpair failed and we were unable to recover it. 00:29:12.910 [2024-06-10 10:54:36.911545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.910 [2024-06-10 10:54:36.911552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.910 qpair failed and we were unable to recover it. 00:29:12.910 [2024-06-10 10:54:36.911787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.910 [2024-06-10 10:54:36.911795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.910 qpair failed and we were unable to recover it. 00:29:12.910 [2024-06-10 10:54:36.912152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.910 [2024-06-10 10:54:36.912164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.910 qpair failed and we were unable to recover it. 
00:29:12.910 [2024-06-10 10:54:36.912448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.910 [2024-06-10 10:54:36.912456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.910 qpair failed and we were unable to recover it. 00:29:12.910 [2024-06-10 10:54:36.912854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.910 [2024-06-10 10:54:36.912861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.910 qpair failed and we were unable to recover it. 00:29:12.910 [2024-06-10 10:54:36.913216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.910 [2024-06-10 10:54:36.913224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.910 qpair failed and we were unable to recover it. 00:29:12.910 [2024-06-10 10:54:36.913574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.910 [2024-06-10 10:54:36.913582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.910 qpair failed and we were unable to recover it. 00:29:12.910 [2024-06-10 10:54:36.913938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.910 [2024-06-10 10:54:36.913946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.910 qpair failed and we were unable to recover it. 00:29:12.910 [2024-06-10 10:54:36.914325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.910 [2024-06-10 10:54:36.914333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.910 qpair failed and we were unable to recover it. 00:29:12.910 [2024-06-10 10:54:36.914708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.910 [2024-06-10 10:54:36.914717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.910 qpair failed and we were unable to recover it. 00:29:12.910 [2024-06-10 10:54:36.914937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.910 [2024-06-10 10:54:36.914944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.910 qpair failed and we were unable to recover it. 00:29:12.910 [2024-06-10 10:54:36.915166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.910 [2024-06-10 10:54:36.915173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.910 qpair failed and we were unable to recover it. 00:29:12.910 [2024-06-10 10:54:36.915531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.910 [2024-06-10 10:54:36.915539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.910 qpair failed and we were unable to recover it. 
00:29:12.910 [2024-06-10 10:54:36.915897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.910 [2024-06-10 10:54:36.915905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.910 qpair failed and we were unable to recover it. 00:29:12.910 [2024-06-10 10:54:36.916273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.910 [2024-06-10 10:54:36.916281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.910 qpair failed and we were unable to recover it. 00:29:12.910 [2024-06-10 10:54:36.916644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.910 [2024-06-10 10:54:36.916653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.910 qpair failed and we were unable to recover it. 00:29:12.910 [2024-06-10 10:54:36.917037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.910 [2024-06-10 10:54:36.917044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.910 qpair failed and we were unable to recover it. 00:29:12.910 [2024-06-10 10:54:36.917250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.910 [2024-06-10 10:54:36.917258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.910 qpair failed and we were unable to recover it. 00:29:12.910 [2024-06-10 10:54:36.917497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.910 [2024-06-10 10:54:36.917505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.910 qpair failed and we were unable to recover it. 00:29:12.910 [2024-06-10 10:54:36.917862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.910 [2024-06-10 10:54:36.917871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.910 qpair failed and we were unable to recover it. 00:29:12.910 [2024-06-10 10:54:36.918068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.910 [2024-06-10 10:54:36.918078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.910 qpair failed and we were unable to recover it. 00:29:12.910 [2024-06-10 10:54:36.918459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.910 [2024-06-10 10:54:36.918467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.910 qpair failed and we were unable to recover it. 00:29:12.910 [2024-06-10 10:54:36.918904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.910 [2024-06-10 10:54:36.918912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.910 qpair failed and we were unable to recover it. 
00:29:12.910 [2024-06-10 10:54:36.918977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.911 [2024-06-10 10:54:36.918984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.911 qpair failed and we were unable to recover it. 00:29:12.911 [2024-06-10 10:54:36.919307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.911 [2024-06-10 10:54:36.919315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.911 qpair failed and we were unable to recover it. 00:29:12.911 [2024-06-10 10:54:36.919670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.911 [2024-06-10 10:54:36.919677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.911 qpair failed and we were unable to recover it. 00:29:12.911 [2024-06-10 10:54:36.920026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.911 [2024-06-10 10:54:36.920033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.911 qpair failed and we were unable to recover it. 00:29:12.911 [2024-06-10 10:54:36.920403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.911 [2024-06-10 10:54:36.920412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.911 qpair failed and we were unable to recover it. 00:29:12.911 [2024-06-10 10:54:36.920632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.911 [2024-06-10 10:54:36.920639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.911 qpair failed and we were unable to recover it. 00:29:12.911 [2024-06-10 10:54:36.920838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.911 [2024-06-10 10:54:36.920847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.911 qpair failed and we were unable to recover it. 00:29:12.911 [2024-06-10 10:54:36.921181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.911 [2024-06-10 10:54:36.921189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.911 qpair failed and we were unable to recover it. 00:29:12.911 [2024-06-10 10:54:36.921409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.911 [2024-06-10 10:54:36.921417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.911 qpair failed and we were unable to recover it. 00:29:12.911 [2024-06-10 10:54:36.921761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.911 [2024-06-10 10:54:36.921769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.911 qpair failed and we were unable to recover it. 
00:29:12.911 [2024-06-10 10:54:36.922045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.911 [2024-06-10 10:54:36.922052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.911 qpair failed and we were unable to recover it. 00:29:12.911 [2024-06-10 10:54:36.922427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.911 [2024-06-10 10:54:36.922435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.911 qpair failed and we were unable to recover it. 00:29:12.911 [2024-06-10 10:54:36.922790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.911 [2024-06-10 10:54:36.922798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.911 qpair failed and we were unable to recover it. 00:29:12.911 [2024-06-10 10:54:36.923066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.911 [2024-06-10 10:54:36.923074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.911 qpair failed and we were unable to recover it. 00:29:12.911 [2024-06-10 10:54:36.923458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.911 [2024-06-10 10:54:36.923467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.911 qpair failed and we were unable to recover it. 00:29:12.911 [2024-06-10 10:54:36.923834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.911 [2024-06-10 10:54:36.923842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.911 qpair failed and we were unable to recover it. 00:29:12.911 [2024-06-10 10:54:36.924199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.911 [2024-06-10 10:54:36.924207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.911 qpair failed and we were unable to recover it. 00:29:12.911 [2024-06-10 10:54:36.924431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.911 [2024-06-10 10:54:36.924439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.911 qpair failed and we were unable to recover it. 00:29:12.911 [2024-06-10 10:54:36.924535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.911 [2024-06-10 10:54:36.924543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.911 qpair failed and we were unable to recover it. 00:29:12.911 [2024-06-10 10:54:36.924886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.911 [2024-06-10 10:54:36.924894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.911 qpair failed and we were unable to recover it. 
00:29:12.911 [2024-06-10 10:54:36.925244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.911 [2024-06-10 10:54:36.925254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.911 qpair failed and we were unable to recover it. 00:29:12.911 [2024-06-10 10:54:36.925477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.911 [2024-06-10 10:54:36.925485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.911 qpair failed and we were unable to recover it. 00:29:12.911 [2024-06-10 10:54:36.925825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.911 [2024-06-10 10:54:36.925833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.911 qpair failed and we were unable to recover it. 00:29:12.911 [2024-06-10 10:54:36.926031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.911 [2024-06-10 10:54:36.926040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.911 qpair failed and we were unable to recover it. 00:29:12.911 [2024-06-10 10:54:36.926220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.911 [2024-06-10 10:54:36.926227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.911 qpair failed and we were unable to recover it. 00:29:12.911 [2024-06-10 10:54:36.926552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.911 [2024-06-10 10:54:36.926560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.911 qpair failed and we were unable to recover it. 00:29:12.911 [2024-06-10 10:54:36.926971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.911 [2024-06-10 10:54:36.926980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.911 qpair failed and we were unable to recover it. 00:29:12.911 [2024-06-10 10:54:36.927328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.911 [2024-06-10 10:54:36.927337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.911 qpair failed and we were unable to recover it. 00:29:12.911 [2024-06-10 10:54:36.927561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.911 [2024-06-10 10:54:36.927569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.911 qpair failed and we were unable to recover it. 00:29:12.911 [2024-06-10 10:54:36.927776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.911 [2024-06-10 10:54:36.927785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.911 qpair failed and we were unable to recover it. 
00:29:12.911 [2024-06-10 10:54:36.928178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.911 [2024-06-10 10:54:36.928187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.911 qpair failed and we were unable to recover it. 00:29:12.911 [2024-06-10 10:54:36.928564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.911 [2024-06-10 10:54:36.928573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.911 qpair failed and we were unable to recover it. 00:29:12.911 [2024-06-10 10:54:36.928947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.911 [2024-06-10 10:54:36.928955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.911 qpair failed and we were unable to recover it. 00:29:12.911 [2024-06-10 10:54:36.929016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.911 [2024-06-10 10:54:36.929022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.911 qpair failed and we were unable to recover it. 00:29:12.911 [2024-06-10 10:54:36.929201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.911 [2024-06-10 10:54:36.929208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.911 qpair failed and we were unable to recover it. 00:29:12.911 [2024-06-10 10:54:36.929592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.911 [2024-06-10 10:54:36.929600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.911 qpair failed and we were unable to recover it. 00:29:12.911 [2024-06-10 10:54:36.929959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.911 [2024-06-10 10:54:36.929967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.911 qpair failed and we were unable to recover it. 00:29:12.911 [2024-06-10 10:54:36.930325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.912 [2024-06-10 10:54:36.930334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.912 qpair failed and we were unable to recover it. 00:29:12.912 [2024-06-10 10:54:36.930385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.912 [2024-06-10 10:54:36.930392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.912 qpair failed and we were unable to recover it. 00:29:12.912 [2024-06-10 10:54:36.930744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.912 [2024-06-10 10:54:36.930752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.912 qpair failed and we were unable to recover it. 
00:29:12.912 [2024-06-10 10:54:36.931110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.912 [2024-06-10 10:54:36.931118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.912 qpair failed and we were unable to recover it. 00:29:12.912 [2024-06-10 10:54:36.931503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.912 [2024-06-10 10:54:36.931512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.912 qpair failed and we were unable to recover it. 00:29:12.912 [2024-06-10 10:54:36.931774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.912 [2024-06-10 10:54:36.931782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.912 qpair failed and we were unable to recover it. 00:29:12.912 [2024-06-10 10:54:36.932008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.912 [2024-06-10 10:54:36.932016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.912 qpair failed and we were unable to recover it. 00:29:12.912 [2024-06-10 10:54:36.932373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.912 [2024-06-10 10:54:36.932381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.912 qpair failed and we were unable to recover it. 00:29:12.912 [2024-06-10 10:54:36.932744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.912 [2024-06-10 10:54:36.932751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.912 qpair failed and we were unable to recover it. 00:29:12.912 [2024-06-10 10:54:36.933110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.912 [2024-06-10 10:54:36.933121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.912 qpair failed and we were unable to recover it. 00:29:12.912 [2024-06-10 10:54:36.933504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.912 [2024-06-10 10:54:36.933512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.912 qpair failed and we were unable to recover it. 00:29:12.912 [2024-06-10 10:54:36.933869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.912 [2024-06-10 10:54:36.933878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.912 qpair failed and we were unable to recover it. 00:29:12.912 [2024-06-10 10:54:36.934235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.912 [2024-06-10 10:54:36.934247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.912 qpair failed and we were unable to recover it. 
00:29:12.912 [2024-06-10 10:54:36.934622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.912 [2024-06-10 10:54:36.934629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.912 qpair failed and we were unable to recover it. 00:29:12.912 [2024-06-10 10:54:36.934857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.912 [2024-06-10 10:54:36.934866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.912 qpair failed and we were unable to recover it. 00:29:12.912 [2024-06-10 10:54:36.935233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.912 [2024-06-10 10:54:36.935241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.912 qpair failed and we were unable to recover it. 00:29:12.912 [2024-06-10 10:54:36.935627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.912 [2024-06-10 10:54:36.935635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.912 qpair failed and we were unable to recover it. 00:29:12.912 [2024-06-10 10:54:36.935994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.912 [2024-06-10 10:54:36.936003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.912 qpair failed and we were unable to recover it. 00:29:12.912 [2024-06-10 10:54:36.936227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.912 [2024-06-10 10:54:36.936235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.912 qpair failed and we were unable to recover it. 00:29:12.912 [2024-06-10 10:54:36.936585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.912 [2024-06-10 10:54:36.936593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.912 qpair failed and we were unable to recover it. 00:29:12.912 [2024-06-10 10:54:36.936906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.912 [2024-06-10 10:54:36.936914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.912 qpair failed and we were unable to recover it. 00:29:12.912 [2024-06-10 10:54:36.937112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.912 [2024-06-10 10:54:36.937120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.912 qpair failed and we were unable to recover it. 00:29:12.912 [2024-06-10 10:54:36.937502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.912 [2024-06-10 10:54:36.937511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.912 qpair failed and we were unable to recover it. 
00:29:12.912 [2024-06-10 10:54:36.937713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.912 [2024-06-10 10:54:36.937721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.912 qpair failed and we were unable to recover it. 00:29:12.912 [2024-06-10 10:54:36.938033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.912 [2024-06-10 10:54:36.938041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.912 qpair failed and we were unable to recover it. 00:29:12.912 [2024-06-10 10:54:36.938406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.912 [2024-06-10 10:54:36.938415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.912 qpair failed and we were unable to recover it. 00:29:12.912 [2024-06-10 10:54:36.938794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.912 [2024-06-10 10:54:36.938802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.912 qpair failed and we were unable to recover it. 00:29:12.912 [2024-06-10 10:54:36.939161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.913 [2024-06-10 10:54:36.939169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.913 qpair failed and we were unable to recover it. 00:29:12.913 [2024-06-10 10:54:36.939485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.913 [2024-06-10 10:54:36.939492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.913 qpair failed and we were unable to recover it. 00:29:12.913 [2024-06-10 10:54:36.939832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.913 [2024-06-10 10:54:36.939840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.913 qpair failed and we were unable to recover it. 00:29:12.913 [2024-06-10 10:54:36.940217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.913 [2024-06-10 10:54:36.940225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.913 qpair failed and we were unable to recover it. 00:29:12.913 [2024-06-10 10:54:36.940450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.913 [2024-06-10 10:54:36.940458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.913 qpair failed and we were unable to recover it. 00:29:12.913 [2024-06-10 10:54:36.940679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.913 [2024-06-10 10:54:36.940688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.913 qpair failed and we were unable to recover it. 
00:29:12.913 [2024-06-10 10:54:36.941053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.913 [2024-06-10 10:54:36.941061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.913 qpair failed and we were unable to recover it. 00:29:12.913 [2024-06-10 10:54:36.941455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.913 [2024-06-10 10:54:36.941463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.913 qpair failed and we were unable to recover it. 00:29:12.913 [2024-06-10 10:54:36.941703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.913 [2024-06-10 10:54:36.941711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.913 qpair failed and we were unable to recover it. 00:29:12.913 [2024-06-10 10:54:36.942078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.913 [2024-06-10 10:54:36.942086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.913 qpair failed and we were unable to recover it. 00:29:12.913 [2024-06-10 10:54:36.942446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.913 [2024-06-10 10:54:36.942455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.913 qpair failed and we were unable to recover it. 00:29:12.913 [2024-06-10 10:54:36.942793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.913 [2024-06-10 10:54:36.942801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.913 qpair failed and we were unable to recover it. 00:29:12.913 [2024-06-10 10:54:36.943022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.913 [2024-06-10 10:54:36.943029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.913 qpair failed and we were unable to recover it. 00:29:12.913 [2024-06-10 10:54:36.943418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.913 [2024-06-10 10:54:36.943426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.913 qpair failed and we were unable to recover it. 00:29:12.913 [2024-06-10 10:54:36.943781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.913 [2024-06-10 10:54:36.943788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.913 qpair failed and we were unable to recover it. 00:29:12.913 [2024-06-10 10:54:36.944013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.913 [2024-06-10 10:54:36.944020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.913 qpair failed and we were unable to recover it. 
00:29:12.913 [2024-06-10 10:54:36.944213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.913 [2024-06-10 10:54:36.944223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.913 qpair failed and we were unable to recover it. 00:29:12.913 [2024-06-10 10:54:36.944459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.913 [2024-06-10 10:54:36.944467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.913 qpair failed and we were unable to recover it. 00:29:12.913 [2024-06-10 10:54:36.944830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.913 [2024-06-10 10:54:36.944838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.913 qpair failed and we were unable to recover it. 00:29:12.913 [2024-06-10 10:54:36.945188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.913 [2024-06-10 10:54:36.945196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.913 qpair failed and we were unable to recover it. 00:29:12.913 [2024-06-10 10:54:36.945623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.913 [2024-06-10 10:54:36.945631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.913 qpair failed and we were unable to recover it. 00:29:12.913 [2024-06-10 10:54:36.945979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.913 [2024-06-10 10:54:36.945987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.913 qpair failed and we were unable to recover it. 00:29:12.913 [2024-06-10 10:54:36.946259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.913 [2024-06-10 10:54:36.946268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.913 qpair failed and we were unable to recover it. 00:29:12.913 [2024-06-10 10:54:36.946445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.913 [2024-06-10 10:54:36.946454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.913 qpair failed and we were unable to recover it. 00:29:12.913 [2024-06-10 10:54:36.946798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.913 [2024-06-10 10:54:36.946806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.913 qpair failed and we were unable to recover it. 00:29:12.913 [2024-06-10 10:54:36.947170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.913 [2024-06-10 10:54:36.947179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.913 qpair failed and we were unable to recover it. 
00:29:12.913 [2024-06-10 10:54:36.947351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.913 [2024-06-10 10:54:36.947359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.913 qpair failed and we were unable to recover it. 00:29:12.913 [2024-06-10 10:54:36.947716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.913 [2024-06-10 10:54:36.947724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.913 qpair failed and we were unable to recover it. 00:29:12.913 [2024-06-10 10:54:36.947946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.913 [2024-06-10 10:54:36.947954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.913 qpair failed and we were unable to recover it. 00:29:12.913 [2024-06-10 10:54:36.948319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.913 [2024-06-10 10:54:36.948327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.913 qpair failed and we were unable to recover it. 00:29:12.913 [2024-06-10 10:54:36.948687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.913 [2024-06-10 10:54:36.948696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.913 qpair failed and we were unable to recover it. 00:29:12.913 [2024-06-10 10:54:36.948893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.913 [2024-06-10 10:54:36.948901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.913 qpair failed and we were unable to recover it. 00:29:12.913 [2024-06-10 10:54:36.949213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.913 [2024-06-10 10:54:36.949222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.913 qpair failed and we were unable to recover it. 00:29:12.913 [2024-06-10 10:54:36.949583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.914 [2024-06-10 10:54:36.949592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.914 qpair failed and we were unable to recover it. 00:29:12.914 [2024-06-10 10:54:36.949946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.914 [2024-06-10 10:54:36.949954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.914 qpair failed and we were unable to recover it. 00:29:12.914 [2024-06-10 10:54:36.950315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.914 [2024-06-10 10:54:36.950324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.914 qpair failed and we were unable to recover it. 
00:29:12.914 [2024-06-10 10:54:36.950594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.914 [2024-06-10 10:54:36.950602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.914 qpair failed and we were unable to recover it. 00:29:12.914 [2024-06-10 10:54:36.950968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.914 [2024-06-10 10:54:36.950977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.914 qpair failed and we were unable to recover it. 00:29:12.914 [2024-06-10 10:54:36.951337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.914 [2024-06-10 10:54:36.951345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.914 qpair failed and we were unable to recover it. 00:29:12.914 [2024-06-10 10:54:36.951701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.914 [2024-06-10 10:54:36.951709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.914 qpair failed and we were unable to recover it. 00:29:12.914 [2024-06-10 10:54:36.952088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.914 [2024-06-10 10:54:36.952095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.914 qpair failed and we were unable to recover it. 00:29:12.914 [2024-06-10 10:54:36.952458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.914 [2024-06-10 10:54:36.952467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.914 qpair failed and we were unable to recover it. 00:29:12.914 [2024-06-10 10:54:36.952702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.914 [2024-06-10 10:54:36.952710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.914 qpair failed and we were unable to recover it. 00:29:12.914 [2024-06-10 10:54:36.953091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.914 [2024-06-10 10:54:36.953099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.914 qpair failed and we were unable to recover it. 00:29:12.914 [2024-06-10 10:54:36.953462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.914 [2024-06-10 10:54:36.953471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.914 qpair failed and we were unable to recover it. 00:29:12.914 [2024-06-10 10:54:36.953829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.914 [2024-06-10 10:54:36.953837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.914 qpair failed and we were unable to recover it. 
00:29:12.914 [2024-06-10 10:54:36.954040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.914 [2024-06-10 10:54:36.954049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.914 qpair failed and we were unable to recover it. 00:29:12.914 [2024-06-10 10:54:36.954379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.914 [2024-06-10 10:54:36.954387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.914 qpair failed and we were unable to recover it. 00:29:12.914 [2024-06-10 10:54:36.954623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.914 [2024-06-10 10:54:36.954630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.914 qpair failed and we were unable to recover it. 00:29:12.914 [2024-06-10 10:54:36.954998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.914 [2024-06-10 10:54:36.955005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.914 qpair failed and we were unable to recover it. 00:29:12.914 [2024-06-10 10:54:36.955367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.914 [2024-06-10 10:54:36.955376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.914 qpair failed and we were unable to recover it. 00:29:12.914 [2024-06-10 10:54:36.955754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.914 [2024-06-10 10:54:36.955763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.914 qpair failed and we were unable to recover it. 00:29:12.914 [2024-06-10 10:54:36.955984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.914 [2024-06-10 10:54:36.955992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.914 qpair failed and we were unable to recover it. 00:29:12.914 [2024-06-10 10:54:36.956360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.914 [2024-06-10 10:54:36.956370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.914 qpair failed and we were unable to recover it. 00:29:12.914 [2024-06-10 10:54:36.956750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.914 [2024-06-10 10:54:36.956758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.914 qpair failed and we were unable to recover it. 00:29:12.914 [2024-06-10 10:54:36.957145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.914 [2024-06-10 10:54:36.957152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.914 qpair failed and we were unable to recover it. 
00:29:12.914 [2024-06-10 10:54:36.957335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.914 [2024-06-10 10:54:36.957344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.914 qpair failed and we were unable to recover it. 00:29:12.914 [2024-06-10 10:54:36.957582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.914 [2024-06-10 10:54:36.957591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.914 qpair failed and we were unable to recover it. 00:29:12.914 [2024-06-10 10:54:36.957950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.914 [2024-06-10 10:54:36.957958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.914 qpair failed and we were unable to recover it. 00:29:12.914 [2024-06-10 10:54:36.958155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.914 [2024-06-10 10:54:36.958162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.914 qpair failed and we were unable to recover it. 00:29:12.914 [2024-06-10 10:54:36.958573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.914 [2024-06-10 10:54:36.958581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.914 qpair failed and we were unable to recover it. 00:29:12.914 [2024-06-10 10:54:36.958947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.914 [2024-06-10 10:54:36.958955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.914 qpair failed and we were unable to recover it. 00:29:12.914 [2024-06-10 10:54:36.959159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.914 [2024-06-10 10:54:36.959169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.914 qpair failed and we were unable to recover it. 00:29:12.914 [2024-06-10 10:54:36.959513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.914 [2024-06-10 10:54:36.959522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.914 qpair failed and we were unable to recover it. 00:29:12.914 [2024-06-10 10:54:36.959879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.914 [2024-06-10 10:54:36.959888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.914 qpair failed and we were unable to recover it. 00:29:12.914 [2024-06-10 10:54:36.960228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.914 [2024-06-10 10:54:36.960235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.914 qpair failed and we were unable to recover it. 
00:29:12.914 [2024-06-10 10:54:36.960597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.914 [2024-06-10 10:54:36.960605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.914 qpair failed and we were unable to recover it. 00:29:12.915 [2024-06-10 10:54:36.960827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.915 [2024-06-10 10:54:36.960834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.915 qpair failed and we were unable to recover it. 00:29:12.915 [2024-06-10 10:54:36.961082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.915 [2024-06-10 10:54:36.961090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.915 qpair failed and we were unable to recover it. 00:29:12.915 [2024-06-10 10:54:36.961311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.915 [2024-06-10 10:54:36.961319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.915 qpair failed and we were unable to recover it. 00:29:12.915 [2024-06-10 10:54:36.961690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.915 [2024-06-10 10:54:36.961699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.915 qpair failed and we were unable to recover it. 00:29:12.915 [2024-06-10 10:54:36.962082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.915 [2024-06-10 10:54:36.962092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.915 qpair failed and we were unable to recover it. 00:29:12.915 [2024-06-10 10:54:36.962455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.915 [2024-06-10 10:54:36.962463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.915 qpair failed and we were unable to recover it. 00:29:12.915 [2024-06-10 10:54:36.962831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.915 [2024-06-10 10:54:36.962839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.915 qpair failed and we were unable to recover it. 00:29:12.915 [2024-06-10 10:54:36.963026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.915 [2024-06-10 10:54:36.963034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.915 qpair failed and we were unable to recover it. 00:29:12.915 [2024-06-10 10:54:36.963378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.915 [2024-06-10 10:54:36.963386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.915 qpair failed and we were unable to recover it. 
00:29:12.915 [2024-06-10 10:54:36.963611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.915 [2024-06-10 10:54:36.963619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.915 qpair failed and we were unable to recover it. 00:29:12.915 [2024-06-10 10:54:36.963996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.915 [2024-06-10 10:54:36.964004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.915 qpair failed and we were unable to recover it. 00:29:12.915 [2024-06-10 10:54:36.964362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.915 [2024-06-10 10:54:36.964371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.915 qpair failed and we were unable to recover it. 00:29:12.915 [2024-06-10 10:54:36.964574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.915 [2024-06-10 10:54:36.964581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.915 qpair failed and we were unable to recover it. 00:29:12.915 [2024-06-10 10:54:36.964768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.915 [2024-06-10 10:54:36.964775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.915 qpair failed and we were unable to recover it. 00:29:12.915 [2024-06-10 10:54:36.964989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.915 [2024-06-10 10:54:36.964996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.915 qpair failed and we were unable to recover it. 00:29:12.915 [2024-06-10 10:54:36.965324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.915 [2024-06-10 10:54:36.965334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.915 qpair failed and we were unable to recover it. 00:29:12.915 [2024-06-10 10:54:36.965695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.915 [2024-06-10 10:54:36.965703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.915 qpair failed and we were unable to recover it. 00:29:12.915 [2024-06-10 10:54:36.965923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.915 [2024-06-10 10:54:36.965930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.915 qpair failed and we were unable to recover it. 00:29:12.915 [2024-06-10 10:54:36.966126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.915 [2024-06-10 10:54:36.966135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.915 qpair failed and we were unable to recover it. 
00:29:12.915 [2024-06-10 10:54:36.966311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.915 [2024-06-10 10:54:36.966320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.915 qpair failed and we were unable to recover it. 00:29:12.915 [2024-06-10 10:54:36.966640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.915 [2024-06-10 10:54:36.966648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.915 qpair failed and we were unable to recover it. 00:29:12.915 [2024-06-10 10:54:36.966869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.915 [2024-06-10 10:54:36.966877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.915 qpair failed and we were unable to recover it. 00:29:12.915 [2024-06-10 10:54:36.967088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.915 [2024-06-10 10:54:36.967096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.915 qpair failed and we were unable to recover it. 00:29:12.915 [2024-06-10 10:54:36.967337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.915 [2024-06-10 10:54:36.967345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.915 qpair failed and we were unable to recover it. 00:29:12.915 [2024-06-10 10:54:36.967734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.915 [2024-06-10 10:54:36.967742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.915 qpair failed and we were unable to recover it. 00:29:12.915 [2024-06-10 10:54:36.968097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.915 [2024-06-10 10:54:36.968106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.915 qpair failed and we were unable to recover it. 00:29:12.915 [2024-06-10 10:54:36.968464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.915 [2024-06-10 10:54:36.968473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.915 qpair failed and we were unable to recover it. 00:29:12.915 [2024-06-10 10:54:36.968693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.915 [2024-06-10 10:54:36.968701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.915 qpair failed and we were unable to recover it. 00:29:12.915 [2024-06-10 10:54:36.969082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.915 [2024-06-10 10:54:36.969091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.915 qpair failed and we were unable to recover it. 
00:29:12.915 [2024-06-10 10:54:36.969365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.915 [2024-06-10 10:54:36.969373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.915 qpair failed and we were unable to recover it. 00:29:12.915 [2024-06-10 10:54:36.969737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.915 [2024-06-10 10:54:36.969746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.915 qpair failed and we were unable to recover it. 00:29:12.915 [2024-06-10 10:54:36.970107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.915 [2024-06-10 10:54:36.970115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.915 qpair failed and we were unable to recover it. 00:29:12.915 [2024-06-10 10:54:36.970337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.915 [2024-06-10 10:54:36.970345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.915 qpair failed and we were unable to recover it. 00:29:12.915 [2024-06-10 10:54:36.970721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.915 [2024-06-10 10:54:36.970729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.915 qpair failed and we were unable to recover it. 00:29:12.915 [2024-06-10 10:54:36.971086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.915 [2024-06-10 10:54:36.971093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.915 qpair failed and we were unable to recover it. 00:29:12.915 [2024-06-10 10:54:36.971314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.915 [2024-06-10 10:54:36.971324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.915 qpair failed and we were unable to recover it. 00:29:12.916 [2024-06-10 10:54:36.971718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.916 [2024-06-10 10:54:36.971726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.916 qpair failed and we were unable to recover it. 00:29:12.916 [2024-06-10 10:54:36.972086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.916 [2024-06-10 10:54:36.972093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.916 qpair failed and we were unable to recover it. 00:29:12.916 [2024-06-10 10:54:36.972452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.916 [2024-06-10 10:54:36.972461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.916 qpair failed and we were unable to recover it. 
00:29:12.916 [2024-06-10 10:54:36.972817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.916 [2024-06-10 10:54:36.972825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.916 qpair failed and we were unable to recover it. 00:29:12.916 [2024-06-10 10:54:36.973207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.916 [2024-06-10 10:54:36.973214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.916 qpair failed and we were unable to recover it. 00:29:12.916 [2024-06-10 10:54:36.973582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.916 [2024-06-10 10:54:36.973591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.916 qpair failed and we were unable to recover it. 00:29:12.916 [2024-06-10 10:54:36.974011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.916 [2024-06-10 10:54:36.974018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.916 qpair failed and we were unable to recover it. 00:29:12.916 [2024-06-10 10:54:36.974359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.916 [2024-06-10 10:54:36.974369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.916 qpair failed and we were unable to recover it. 00:29:12.916 [2024-06-10 10:54:36.974749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.916 [2024-06-10 10:54:36.974757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.916 qpair failed and we were unable to recover it. 00:29:12.916 [2024-06-10 10:54:36.975117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.916 [2024-06-10 10:54:36.975125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.916 qpair failed and we were unable to recover it. 00:29:12.916 [2024-06-10 10:54:36.975353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.916 [2024-06-10 10:54:36.975361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.916 qpair failed and we were unable to recover it. 00:29:12.916 [2024-06-10 10:54:36.975731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.916 [2024-06-10 10:54:36.975739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.916 qpair failed and we were unable to recover it. 00:29:12.916 [2024-06-10 10:54:36.975930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.916 [2024-06-10 10:54:36.975938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.916 qpair failed and we were unable to recover it. 
00:29:12.916 [2024-06-10 10:54:36.976288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.916 [2024-06-10 10:54:36.976297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.916 qpair failed and we were unable to recover it. 00:29:12.916 [2024-06-10 10:54:36.976514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.916 [2024-06-10 10:54:36.976522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.916 qpair failed and we were unable to recover it. 00:29:12.916 [2024-06-10 10:54:36.976933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.916 [2024-06-10 10:54:36.976941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.916 qpair failed and we were unable to recover it. 00:29:12.916 [2024-06-10 10:54:36.977328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.916 [2024-06-10 10:54:36.977336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.916 qpair failed and we were unable to recover it. 00:29:12.916 [2024-06-10 10:54:36.977716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.916 [2024-06-10 10:54:36.977725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.916 qpair failed and we were unable to recover it. 00:29:12.916 [2024-06-10 10:54:36.977774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.916 [2024-06-10 10:54:36.977781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.916 qpair failed and we were unable to recover it. 00:29:12.916 [2024-06-10 10:54:36.978130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.916 [2024-06-10 10:54:36.978138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.916 qpair failed and we were unable to recover it. 00:29:12.916 [2024-06-10 10:54:36.978522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.916 [2024-06-10 10:54:36.978530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.916 qpair failed and we were unable to recover it. 00:29:12.916 [2024-06-10 10:54:36.978881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.916 [2024-06-10 10:54:36.978890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.916 qpair failed and we were unable to recover it. 00:29:12.916 [2024-06-10 10:54:36.979240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.916 [2024-06-10 10:54:36.979253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.916 qpair failed and we were unable to recover it. 
00:29:12.916 [2024-06-10 10:54:36.979626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.916 [2024-06-10 10:54:36.979633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.916 qpair failed and we were unable to recover it. 00:29:12.916 [2024-06-10 10:54:36.979924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.916 [2024-06-10 10:54:36.979932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.916 qpair failed and we were unable to recover it. 00:29:12.916 [2024-06-10 10:54:36.980162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.916 [2024-06-10 10:54:36.980169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.916 qpair failed and we were unable to recover it. 00:29:12.916 [2024-06-10 10:54:36.980584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.916 [2024-06-10 10:54:36.980594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.916 qpair failed and we were unable to recover it. 00:29:12.916 [2024-06-10 10:54:36.980957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.916 [2024-06-10 10:54:36.980966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.916 qpair failed and we were unable to recover it. 00:29:12.916 [2024-06-10 10:54:36.981266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.916 [2024-06-10 10:54:36.981275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.916 qpair failed and we were unable to recover it. 00:29:12.916 [2024-06-10 10:54:36.981628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.916 [2024-06-10 10:54:36.981636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.916 qpair failed and we were unable to recover it. 00:29:12.916 [2024-06-10 10:54:36.981999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.916 [2024-06-10 10:54:36.982007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.916 qpair failed and we were unable to recover it. 00:29:12.916 [2024-06-10 10:54:36.982361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.916 [2024-06-10 10:54:36.982370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.916 qpair failed and we were unable to recover it. 00:29:12.916 [2024-06-10 10:54:36.982727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.916 [2024-06-10 10:54:36.982736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.916 qpair failed and we were unable to recover it. 
00:29:12.916 [2024-06-10 10:54:36.982955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.916 [2024-06-10 10:54:36.982963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.916 qpair failed and we were unable to recover it. 00:29:12.916 [2024-06-10 10:54:36.983325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.916 [2024-06-10 10:54:36.983334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.916 qpair failed and we were unable to recover it. 00:29:12.916 [2024-06-10 10:54:36.983726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.916 [2024-06-10 10:54:36.983735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.916 qpair failed and we were unable to recover it. 00:29:12.916 [2024-06-10 10:54:36.984098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.916 [2024-06-10 10:54:36.984106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.916 qpair failed and we were unable to recover it. 00:29:12.917 [2024-06-10 10:54:36.984463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.917 [2024-06-10 10:54:36.984471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.917 qpair failed and we were unable to recover it. 00:29:12.917 [2024-06-10 10:54:36.984843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.917 [2024-06-10 10:54:36.984851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.917 qpair failed and we were unable to recover it. 00:29:12.917 [2024-06-10 10:54:36.985202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.917 [2024-06-10 10:54:36.985214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.917 qpair failed and we were unable to recover it. 00:29:12.917 [2024-06-10 10:54:36.985576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.917 [2024-06-10 10:54:36.985584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.917 qpair failed and we were unable to recover it. 00:29:12.917 [2024-06-10 10:54:36.985847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.917 [2024-06-10 10:54:36.985855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.917 qpair failed and we were unable to recover it. 00:29:12.917 [2024-06-10 10:54:36.986075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.917 [2024-06-10 10:54:36.986083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.917 qpair failed and we were unable to recover it. 
00:29:12.917 [2024-06-10 10:54:36.986147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.917 [2024-06-10 10:54:36.986153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.917 qpair failed and we were unable to recover it. 00:29:12.917 [2024-06-10 10:54:36.986480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.917 [2024-06-10 10:54:36.986488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.917 qpair failed and we were unable to recover it. 00:29:12.917 [2024-06-10 10:54:36.986838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.917 [2024-06-10 10:54:36.986846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.917 qpair failed and we were unable to recover it. 00:29:12.917 [2024-06-10 10:54:36.987065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.917 [2024-06-10 10:54:36.987075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.917 qpair failed and we were unable to recover it. 00:29:12.917 [2024-06-10 10:54:36.987439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.917 [2024-06-10 10:54:36.987447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.917 qpair failed and we were unable to recover it. 00:29:12.917 [2024-06-10 10:54:36.987810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.917 [2024-06-10 10:54:36.987818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.917 qpair failed and we were unable to recover it. 00:29:12.917 [2024-06-10 10:54:36.988169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.917 [2024-06-10 10:54:36.988178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.917 qpair failed and we were unable to recover it. 00:29:12.917 [2024-06-10 10:54:36.988371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.917 [2024-06-10 10:54:36.988378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.917 qpair failed and we were unable to recover it. 00:29:12.917 [2024-06-10 10:54:36.988692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.917 [2024-06-10 10:54:36.988700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.917 qpair failed and we were unable to recover it. 00:29:12.917 [2024-06-10 10:54:36.989058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.917 [2024-06-10 10:54:36.989067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.917 qpair failed and we were unable to recover it. 
00:29:12.917 [2024-06-10 10:54:36.989328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.917 [2024-06-10 10:54:36.989336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.917 qpair failed and we were unable to recover it. 00:29:12.917 [2024-06-10 10:54:36.989717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.917 [2024-06-10 10:54:36.989727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.917 qpair failed and we were unable to recover it. 00:29:12.917 [2024-06-10 10:54:36.989951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.917 [2024-06-10 10:54:36.989959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.917 qpair failed and we were unable to recover it. 00:29:12.917 [2024-06-10 10:54:36.990172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.917 [2024-06-10 10:54:36.990179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.917 qpair failed and we were unable to recover it. 00:29:12.917 [2024-06-10 10:54:36.990557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.917 [2024-06-10 10:54:36.990566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.917 qpair failed and we were unable to recover it. 00:29:12.917 [2024-06-10 10:54:36.990952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.917 [2024-06-10 10:54:36.990960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.917 qpair failed and we were unable to recover it. 00:29:12.917 [2024-06-10 10:54:36.991183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.917 [2024-06-10 10:54:36.991190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.917 qpair failed and we were unable to recover it. 00:29:12.917 [2024-06-10 10:54:36.991549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.917 [2024-06-10 10:54:36.991558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.917 qpair failed and we were unable to recover it. 00:29:12.917 [2024-06-10 10:54:36.991917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.917 [2024-06-10 10:54:36.991925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.917 qpair failed and we were unable to recover it. 00:29:12.917 [2024-06-10 10:54:36.992306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.917 [2024-06-10 10:54:36.992315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.917 qpair failed and we were unable to recover it. 
00:29:12.917 [2024-06-10 10:54:36.992392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.917 [2024-06-10 10:54:36.992399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.917 qpair failed and we were unable to recover it. 00:29:12.917 [2024-06-10 10:54:36.992575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.917 [2024-06-10 10:54:36.992583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.917 qpair failed and we were unable to recover it. 00:29:12.917 [2024-06-10 10:54:36.992803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.917 [2024-06-10 10:54:36.992811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.917 qpair failed and we were unable to recover it. 00:29:12.917 [2024-06-10 10:54:36.993164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.917 [2024-06-10 10:54:36.993173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.917 qpair failed and we were unable to recover it. 00:29:12.917 [2024-06-10 10:54:36.993394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.917 [2024-06-10 10:54:36.993403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.917 qpair failed and we were unable to recover it. 00:29:12.918 [2024-06-10 10:54:36.993770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.918 [2024-06-10 10:54:36.993778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.918 qpair failed and we were unable to recover it. 00:29:12.918 [2024-06-10 10:54:36.994094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.918 [2024-06-10 10:54:36.994102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.918 qpair failed and we were unable to recover it. 00:29:12.918 [2024-06-10 10:54:36.994448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.918 [2024-06-10 10:54:36.994456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.918 qpair failed and we were unable to recover it. 00:29:12.918 [2024-06-10 10:54:36.994722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.918 [2024-06-10 10:54:36.994729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.918 qpair failed and we were unable to recover it. 00:29:12.918 [2024-06-10 10:54:36.994930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.918 [2024-06-10 10:54:36.994938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.918 qpair failed and we were unable to recover it. 
00:29:12.918 [2024-06-10 10:54:36.995266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.918 [2024-06-10 10:54:36.995274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.918 qpair failed and we were unable to recover it. 00:29:12.918 [2024-06-10 10:54:36.995642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.918 [2024-06-10 10:54:36.995650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.918 qpair failed and we were unable to recover it. 00:29:12.918 [2024-06-10 10:54:36.995869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.918 [2024-06-10 10:54:36.995876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.918 qpair failed and we were unable to recover it. 00:29:12.918 [2024-06-10 10:54:36.996178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.918 [2024-06-10 10:54:36.996186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.918 qpair failed and we were unable to recover it. 00:29:12.918 [2024-06-10 10:54:36.996380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.918 [2024-06-10 10:54:36.996388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.918 qpair failed and we were unable to recover it. 00:29:12.918 [2024-06-10 10:54:36.996589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.918 [2024-06-10 10:54:36.996597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.918 qpair failed and we were unable to recover it. 00:29:12.918 [2024-06-10 10:54:36.996646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.918 [2024-06-10 10:54:36.996655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.918 qpair failed and we were unable to recover it. 00:29:12.918 [2024-06-10 10:54:36.996867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.918 [2024-06-10 10:54:36.996875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.918 qpair failed and we were unable to recover it. 00:29:12.918 [2024-06-10 10:54:36.997254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.918 [2024-06-10 10:54:36.997262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.918 qpair failed and we were unable to recover it. 00:29:12.918 [2024-06-10 10:54:36.997629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.918 [2024-06-10 10:54:36.997637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.918 qpair failed and we were unable to recover it. 
00:29:12.918 [2024-06-10 10:54:36.997997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.918 [2024-06-10 10:54:36.998005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.918 qpair failed and we were unable to recover it. 00:29:12.918 [2024-06-10 10:54:36.998374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.918 [2024-06-10 10:54:36.998383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.918 qpair failed and we were unable to recover it. 00:29:12.918 [2024-06-10 10:54:36.998743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.918 [2024-06-10 10:54:36.998752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.918 qpair failed and we were unable to recover it. 00:29:12.918 [2024-06-10 10:54:36.998812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.918 [2024-06-10 10:54:36.998819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.918 qpair failed and we were unable to recover it. 00:29:12.918 [2024-06-10 10:54:36.999193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.918 [2024-06-10 10:54:36.999201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.918 qpair failed and we were unable to recover it. 00:29:12.918 [2024-06-10 10:54:36.999551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.918 [2024-06-10 10:54:36.999559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.918 qpair failed and we were unable to recover it. 00:29:12.918 [2024-06-10 10:54:36.999941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.918 [2024-06-10 10:54:36.999949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.918 qpair failed and we were unable to recover it. 00:29:12.918 [2024-06-10 10:54:37.000307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.918 [2024-06-10 10:54:37.000315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.918 qpair failed and we were unable to recover it. 00:29:12.918 [2024-06-10 10:54:37.000654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.918 [2024-06-10 10:54:37.000663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.918 qpair failed and we were unable to recover it. 00:29:12.918 [2024-06-10 10:54:37.000857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.918 [2024-06-10 10:54:37.000864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.918 qpair failed and we were unable to recover it. 
00:29:12.918 [2024-06-10 10:54:37.001066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.918 [2024-06-10 10:54:37.001073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.918 qpair failed and we were unable to recover it. 00:29:12.918 [2024-06-10 10:54:37.001324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.918 [2024-06-10 10:54:37.001332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.918 qpair failed and we were unable to recover it. 00:29:12.918 [2024-06-10 10:54:37.001556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.918 [2024-06-10 10:54:37.001565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.918 qpair failed and we were unable to recover it. 00:29:12.918 [2024-06-10 10:54:37.001932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.918 [2024-06-10 10:54:37.001941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.918 qpair failed and we were unable to recover it. 00:29:12.918 [2024-06-10 10:54:37.002129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.918 [2024-06-10 10:54:37.002138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.918 qpair failed and we were unable to recover it. 00:29:12.918 [2024-06-10 10:54:37.002504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.918 [2024-06-10 10:54:37.002512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.918 qpair failed and we were unable to recover it. 00:29:12.918 [2024-06-10 10:54:37.002574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.918 [2024-06-10 10:54:37.002580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.918 qpair failed and we were unable to recover it. 00:29:12.918 [2024-06-10 10:54:37.002911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.918 [2024-06-10 10:54:37.002919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.918 qpair failed and we were unable to recover it. 00:29:12.918 [2024-06-10 10:54:37.003007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.918 [2024-06-10 10:54:37.003015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.918 qpair failed and we were unable to recover it. 00:29:12.918 [2024-06-10 10:54:37.003359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.918 [2024-06-10 10:54:37.003367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.918 qpair failed and we were unable to recover it. 
00:29:12.918 [2024-06-10 10:54:37.003730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.919 [2024-06-10 10:54:37.003738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.919 qpair failed and we were unable to recover it. 00:29:12.919 [2024-06-10 10:54:37.003961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.919 [2024-06-10 10:54:37.003969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.919 qpair failed and we were unable to recover it. 00:29:12.919 [2024-06-10 10:54:37.004287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.919 [2024-06-10 10:54:37.004295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.919 qpair failed and we were unable to recover it. 00:29:12.919 [2024-06-10 10:54:37.004684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.919 [2024-06-10 10:54:37.004693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.919 qpair failed and we were unable to recover it. 00:29:12.919 [2024-06-10 10:54:37.004913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.919 [2024-06-10 10:54:37.004921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.919 qpair failed and we were unable to recover it. 00:29:12.919 [2024-06-10 10:54:37.005123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.919 [2024-06-10 10:54:37.005131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.919 qpair failed and we were unable to recover it. 00:29:12.919 [2024-06-10 10:54:37.005444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.919 [2024-06-10 10:54:37.005452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.919 qpair failed and we were unable to recover it. 00:29:12.919 [2024-06-10 10:54:37.005512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.919 [2024-06-10 10:54:37.005518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.919 qpair failed and we were unable to recover it. 00:29:12.919 [2024-06-10 10:54:37.005847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.919 [2024-06-10 10:54:37.005855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.919 qpair failed and we were unable to recover it. 00:29:12.919 [2024-06-10 10:54:37.006211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.919 [2024-06-10 10:54:37.006219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.919 qpair failed and we were unable to recover it. 
00:29:12.919 [2024-06-10 10:54:37.006477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.919 [2024-06-10 10:54:37.006486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.919 qpair failed and we were unable to recover it. 00:29:12.919 [2024-06-10 10:54:37.006713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.919 [2024-06-10 10:54:37.006720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.919 qpair failed and we were unable to recover it. 00:29:12.919 [2024-06-10 10:54:37.007101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.919 [2024-06-10 10:54:37.007109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.919 qpair failed and we were unable to recover it. 00:29:12.919 [2024-06-10 10:54:37.007322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.919 [2024-06-10 10:54:37.007330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.919 qpair failed and we were unable to recover it. 00:29:12.919 [2024-06-10 10:54:37.007707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.919 [2024-06-10 10:54:37.007715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.919 qpair failed and we were unable to recover it. 00:29:12.919 [2024-06-10 10:54:37.007920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.919 [2024-06-10 10:54:37.007927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.919 qpair failed and we were unable to recover it. 00:29:12.919 [2024-06-10 10:54:37.008155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.919 [2024-06-10 10:54:37.008165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.919 qpair failed and we were unable to recover it. 00:29:12.919 [2024-06-10 10:54:37.008517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.919 [2024-06-10 10:54:37.008526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.919 qpair failed and we were unable to recover it. 00:29:12.919 [2024-06-10 10:54:37.008831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.919 [2024-06-10 10:54:37.008839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.919 qpair failed and we were unable to recover it. 00:29:12.919 [2024-06-10 10:54:37.009034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.919 [2024-06-10 10:54:37.009042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.919 qpair failed and we were unable to recover it. 
00:29:12.919 [2024-06-10 10:54:37.009390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.919 [2024-06-10 10:54:37.009399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.919 qpair failed and we were unable to recover it. 00:29:12.919 [2024-06-10 10:54:37.009755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.919 [2024-06-10 10:54:37.009763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.919 qpair failed and we were unable to recover it. 00:29:12.919 [2024-06-10 10:54:37.009986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.919 [2024-06-10 10:54:37.009994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.919 qpair failed and we were unable to recover it. 00:29:12.919 [2024-06-10 10:54:37.010309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.919 [2024-06-10 10:54:37.010317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.919 qpair failed and we were unable to recover it. 00:29:12.919 [2024-06-10 10:54:37.010539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.919 [2024-06-10 10:54:37.010546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.919 qpair failed and we were unable to recover it. 00:29:12.919 [2024-06-10 10:54:37.010908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.919 [2024-06-10 10:54:37.010916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.919 qpair failed and we were unable to recover it. 00:29:12.919 [2024-06-10 10:54:37.011353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.919 [2024-06-10 10:54:37.011361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.919 qpair failed and we were unable to recover it. 00:29:12.919 [2024-06-10 10:54:37.011716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.919 [2024-06-10 10:54:37.011724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.919 qpair failed and we were unable to recover it. 00:29:12.919 [2024-06-10 10:54:37.012102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.919 [2024-06-10 10:54:37.012110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.919 qpair failed and we were unable to recover it. 00:29:12.919 [2024-06-10 10:54:37.012375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.919 [2024-06-10 10:54:37.012384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.919 qpair failed and we were unable to recover it. 
00:29:12.919 [2024-06-10 10:54:37.012756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.919 [2024-06-10 10:54:37.012764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.919 qpair failed and we were unable to recover it. 00:29:12.919 [2024-06-10 10:54:37.012984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.919 [2024-06-10 10:54:37.012991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.919 qpair failed and we were unable to recover it. 00:29:12.919 [2024-06-10 10:54:37.013378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.919 [2024-06-10 10:54:37.013386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.919 qpair failed and we were unable to recover it. 00:29:12.919 [2024-06-10 10:54:37.013743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.920 [2024-06-10 10:54:37.013751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.920 qpair failed and we were unable to recover it. 00:29:12.920 [2024-06-10 10:54:37.014001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.920 [2024-06-10 10:54:37.014008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.920 qpair failed and we were unable to recover it. 00:29:12.920 [2024-06-10 10:54:37.014226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.920 [2024-06-10 10:54:37.014235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.920 qpair failed and we were unable to recover it. 00:29:12.920 [2024-06-10 10:54:37.014445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.920 [2024-06-10 10:54:37.014454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.920 qpair failed and we were unable to recover it. 00:29:12.920 [2024-06-10 10:54:37.014809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.920 [2024-06-10 10:54:37.014817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.920 qpair failed and we were unable to recover it. 00:29:12.920 [2024-06-10 10:54:37.015173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.920 [2024-06-10 10:54:37.015181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.920 qpair failed and we were unable to recover it. 00:29:12.920 [2024-06-10 10:54:37.015550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.920 [2024-06-10 10:54:37.015559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.920 qpair failed and we were unable to recover it. 
00:29:12.920 [2024-06-10 10:54:37.015941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.920 [2024-06-10 10:54:37.015949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.920 qpair failed and we were unable to recover it. 00:29:12.920 [2024-06-10 10:54:37.016198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.920 [2024-06-10 10:54:37.016206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.920 qpair failed and we were unable to recover it. 00:29:12.920 [2024-06-10 10:54:37.016571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.920 [2024-06-10 10:54:37.016580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.920 qpair failed and we were unable to recover it. 00:29:12.920 [2024-06-10 10:54:37.016770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.920 [2024-06-10 10:54:37.016778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.920 qpair failed and we were unable to recover it. 00:29:12.920 [2024-06-10 10:54:37.016985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.920 [2024-06-10 10:54:37.016994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.920 qpair failed and we were unable to recover it. 00:29:12.920 [2024-06-10 10:54:37.017365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.920 [2024-06-10 10:54:37.017374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.920 qpair failed and we were unable to recover it. 00:29:12.920 [2024-06-10 10:54:37.017737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.920 [2024-06-10 10:54:37.017746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.920 qpair failed and we were unable to recover it. 00:29:12.920 [2024-06-10 10:54:37.017938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.920 [2024-06-10 10:54:37.017947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.920 qpair failed and we were unable to recover it. 00:29:12.920 [2024-06-10 10:54:37.018298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.920 [2024-06-10 10:54:37.018308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.920 qpair failed and we were unable to recover it. 00:29:12.920 [2024-06-10 10:54:37.018666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.920 [2024-06-10 10:54:37.018674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.920 qpair failed and we were unable to recover it. 
00:29:12.920 [2024-06-10 10:54:37.018894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.920 [2024-06-10 10:54:37.018902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.920 qpair failed and we were unable to recover it. 00:29:12.920 [2024-06-10 10:54:37.019262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.920 [2024-06-10 10:54:37.019271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.920 qpair failed and we were unable to recover it. 00:29:12.920 [2024-06-10 10:54:37.019603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.920 [2024-06-10 10:54:37.019611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.920 qpair failed and we were unable to recover it. 00:29:12.920 [2024-06-10 10:54:37.019978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.920 [2024-06-10 10:54:37.019986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.920 qpair failed and we were unable to recover it. 00:29:12.920 [2024-06-10 10:54:37.020209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.920 [2024-06-10 10:54:37.020217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.920 qpair failed and we were unable to recover it. 00:29:12.920 [2024-06-10 10:54:37.020570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.920 [2024-06-10 10:54:37.020578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.920 qpair failed and we were unable to recover it. 00:29:12.920 [2024-06-10 10:54:37.020914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.920 [2024-06-10 10:54:37.020925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.920 qpair failed and we were unable to recover it. 00:29:12.920 [2024-06-10 10:54:37.021283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.920 [2024-06-10 10:54:37.021291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.920 qpair failed and we were unable to recover it. 00:29:12.920 [2024-06-10 10:54:37.021648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.920 [2024-06-10 10:54:37.021656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.920 qpair failed and we were unable to recover it. 00:29:12.920 [2024-06-10 10:54:37.022014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.920 [2024-06-10 10:54:37.022022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.920 qpair failed and we were unable to recover it. 
00:29:12.920 [2024-06-10 10:54:37.022218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.920 [2024-06-10 10:54:37.022225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.920 qpair failed and we were unable to recover it. 00:29:12.920 [2024-06-10 10:54:37.022600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.920 [2024-06-10 10:54:37.022608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.920 qpair failed and we were unable to recover it. 00:29:12.920 [2024-06-10 10:54:37.022821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.920 [2024-06-10 10:54:37.022829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.920 qpair failed and we were unable to recover it. 00:29:12.920 [2024-06-10 10:54:37.023064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.920 [2024-06-10 10:54:37.023073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.920 qpair failed and we were unable to recover it. 00:29:12.920 [2024-06-10 10:54:37.023454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.920 [2024-06-10 10:54:37.023462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.920 qpair failed and we were unable to recover it. 00:29:12.920 [2024-06-10 10:54:37.023690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.920 [2024-06-10 10:54:37.023698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.920 qpair failed and we were unable to recover it. 00:29:12.920 [2024-06-10 10:54:37.023899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.920 [2024-06-10 10:54:37.023907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.920 qpair failed and we were unable to recover it. 00:29:12.921 [2024-06-10 10:54:37.024073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.921 [2024-06-10 10:54:37.024081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.921 qpair failed and we were unable to recover it. 00:29:12.921 [2024-06-10 10:54:37.024456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.921 [2024-06-10 10:54:37.024464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.921 qpair failed and we were unable to recover it. 00:29:12.921 [2024-06-10 10:54:37.024826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.921 [2024-06-10 10:54:37.024834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.921 qpair failed and we were unable to recover it. 
00:29:12.921 [2024-06-10 10:54:37.025191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.921 [2024-06-10 10:54:37.025199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.921 qpair failed and we were unable to recover it. 00:29:12.921 [2024-06-10 10:54:37.025560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.921 [2024-06-10 10:54:37.025569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.921 qpair failed and we were unable to recover it. 00:29:12.921 [2024-06-10 10:54:37.025914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.921 [2024-06-10 10:54:37.025922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.921 qpair failed and we were unable to recover it. 00:29:12.921 [2024-06-10 10:54:37.026142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.921 [2024-06-10 10:54:37.026149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.921 qpair failed and we were unable to recover it. 00:29:12.921 [2024-06-10 10:54:37.026369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.921 [2024-06-10 10:54:37.026378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.921 qpair failed and we were unable to recover it. 00:29:12.921 [2024-06-10 10:54:37.026748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.921 [2024-06-10 10:54:37.026755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.921 qpair failed and we were unable to recover it. 00:29:12.921 [2024-06-10 10:54:37.027108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.921 [2024-06-10 10:54:37.027115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.921 qpair failed and we were unable to recover it. 00:29:12.921 [2024-06-10 10:54:37.027517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.921 [2024-06-10 10:54:37.027527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.921 qpair failed and we were unable to recover it. 00:29:12.921 [2024-06-10 10:54:37.027747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.921 [2024-06-10 10:54:37.027755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.921 qpair failed and we were unable to recover it. 00:29:12.921 [2024-06-10 10:54:37.028112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.921 [2024-06-10 10:54:37.028121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.921 qpair failed and we were unable to recover it. 
00:29:12.921 [2024-06-10 10:54:37.028467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.921 [2024-06-10 10:54:37.028474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.921 qpair failed and we were unable to recover it. 00:29:12.921 [2024-06-10 10:54:37.028844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.921 [2024-06-10 10:54:37.028851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.921 qpair failed and we were unable to recover it. 00:29:12.921 [2024-06-10 10:54:37.029191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.921 [2024-06-10 10:54:37.029199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.921 qpair failed and we were unable to recover it. 00:29:12.921 [2024-06-10 10:54:37.029570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.921 [2024-06-10 10:54:37.029579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.921 qpair failed and we were unable to recover it. 00:29:12.921 [2024-06-10 10:54:37.029803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.921 [2024-06-10 10:54:37.029812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.921 qpair failed and we were unable to recover it. 00:29:12.921 [2024-06-10 10:54:37.030197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.921 [2024-06-10 10:54:37.030205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.921 qpair failed and we were unable to recover it. 00:29:12.921 [2024-06-10 10:54:37.030565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.921 [2024-06-10 10:54:37.030574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.921 qpair failed and we were unable to recover it. 00:29:12.921 [2024-06-10 10:54:37.030931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.921 [2024-06-10 10:54:37.030939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.921 qpair failed and we were unable to recover it. 00:29:12.921 [2024-06-10 10:54:37.031126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.921 [2024-06-10 10:54:37.031134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.921 qpair failed and we were unable to recover it. 00:29:12.921 [2024-06-10 10:54:37.031356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.921 [2024-06-10 10:54:37.031364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.921 qpair failed and we were unable to recover it. 
00:29:12.921 [2024-06-10 10:54:37.031571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.921 [2024-06-10 10:54:37.031579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.921 qpair failed and we were unable to recover it. 00:29:12.921 [2024-06-10 10:54:37.031900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.921 [2024-06-10 10:54:37.031907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.921 qpair failed and we were unable to recover it. 00:29:12.921 [2024-06-10 10:54:37.032248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.921 [2024-06-10 10:54:37.032257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.921 qpair failed and we were unable to recover it. 00:29:12.921 [2024-06-10 10:54:37.032588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.921 [2024-06-10 10:54:37.032596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.921 qpair failed and we were unable to recover it. 00:29:12.921 [2024-06-10 10:54:37.032955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.921 [2024-06-10 10:54:37.032964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.921 qpair failed and we were unable to recover it. 00:29:12.921 [2024-06-10 10:54:37.033325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.921 [2024-06-10 10:54:37.033334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.921 qpair failed and we were unable to recover it. 00:29:12.921 [2024-06-10 10:54:37.033724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.921 [2024-06-10 10:54:37.033734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.921 qpair failed and we were unable to recover it. 00:29:12.921 [2024-06-10 10:54:37.033967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.921 [2024-06-10 10:54:37.033974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.921 qpair failed and we were unable to recover it. 00:29:12.921 [2024-06-10 10:54:37.034375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.921 [2024-06-10 10:54:37.034383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.921 qpair failed and we were unable to recover it. 00:29:12.921 [2024-06-10 10:54:37.034743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.921 [2024-06-10 10:54:37.034752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.921 qpair failed and we were unable to recover it. 
00:29:12.921 [2024-06-10 10:54:37.035136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.921 [2024-06-10 10:54:37.035144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.921 qpair failed and we were unable to recover it. 00:29:12.921 [2024-06-10 10:54:37.035405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.921 [2024-06-10 10:54:37.035412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.921 qpair failed and we were unable to recover it. 00:29:12.921 [2024-06-10 10:54:37.035780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.922 [2024-06-10 10:54:37.035788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.922 qpair failed and we were unable to recover it. 00:29:12.922 [2024-06-10 10:54:37.036143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.922 [2024-06-10 10:54:37.036151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.922 qpair failed and we were unable to recover it. 00:29:12.922 [2024-06-10 10:54:37.036528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.922 [2024-06-10 10:54:37.036538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.922 qpair failed and we were unable to recover it. 00:29:12.922 [2024-06-10 10:54:37.036897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.922 [2024-06-10 10:54:37.036906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.922 qpair failed and we were unable to recover it. 00:29:12.922 [2024-06-10 10:54:37.037261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.922 [2024-06-10 10:54:37.037269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.922 qpair failed and we were unable to recover it. 00:29:12.922 [2024-06-10 10:54:37.037487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.922 [2024-06-10 10:54:37.037495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.922 qpair failed and we were unable to recover it. 00:29:12.922 [2024-06-10 10:54:37.037853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.922 [2024-06-10 10:54:37.037861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.922 qpair failed and we were unable to recover it. 00:29:12.922 [2024-06-10 10:54:37.038219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.922 [2024-06-10 10:54:37.038227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.922 qpair failed and we were unable to recover it. 
00:29:12.922 [2024-06-10 10:54:37.038582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.922 [2024-06-10 10:54:37.038590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.922 qpair failed and we were unable to recover it. 00:29:12.922 [2024-06-10 10:54:37.038778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.922 [2024-06-10 10:54:37.038786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.922 qpair failed and we were unable to recover it. 00:29:12.922 [2024-06-10 10:54:37.039127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.922 [2024-06-10 10:54:37.039136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.922 qpair failed and we were unable to recover it. 00:29:12.922 [2024-06-10 10:54:37.039367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.922 [2024-06-10 10:54:37.039375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.922 qpair failed and we were unable to recover it. 00:29:12.922 [2024-06-10 10:54:37.039624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.922 [2024-06-10 10:54:37.039631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.922 qpair failed and we were unable to recover it. 00:29:12.922 [2024-06-10 10:54:37.040002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.922 [2024-06-10 10:54:37.040009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.922 qpair failed and we were unable to recover it. 00:29:12.922 [2024-06-10 10:54:37.040391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.922 [2024-06-10 10:54:37.040399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.922 qpair failed and we were unable to recover it. 00:29:12.922 [2024-06-10 10:54:37.040763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.922 [2024-06-10 10:54:37.040773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.922 qpair failed and we were unable to recover it. 00:29:12.922 [2024-06-10 10:54:37.041138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.922 [2024-06-10 10:54:37.041146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.922 qpair failed and we were unable to recover it. 00:29:12.922 [2024-06-10 10:54:37.041372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.922 [2024-06-10 10:54:37.041381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.922 qpair failed and we were unable to recover it. 
00:29:12.922 [2024-06-10 10:54:37.041771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.922 [2024-06-10 10:54:37.041779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420
00:29:12.922 qpair failed and we were unable to recover it.
[... the same three-line sequence — "connect() failed, errno = 111", "sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420", "qpair failed and we were unable to recover it." — repeats verbatim for the rest of this interval, with host timestamps 2024-06-10 10:54:37.041771 through 10:54:37.107432 ...]
00:29:12.928 [2024-06-10 10:54:37.107657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.928 [2024-06-10 10:54:37.107665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.928 qpair failed and we were unable to recover it. 00:29:12.928 [2024-06-10 10:54:37.107928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.928 [2024-06-10 10:54:37.107936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.928 qpair failed and we were unable to recover it. 00:29:12.928 [2024-06-10 10:54:37.108145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.928 [2024-06-10 10:54:37.108153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.928 qpair failed and we were unable to recover it. 00:29:12.928 [2024-06-10 10:54:37.108489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.928 [2024-06-10 10:54:37.108497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.928 qpair failed and we were unable to recover it. 00:29:12.928 [2024-06-10 10:54:37.108720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.928 [2024-06-10 10:54:37.108727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.928 qpair failed and we were unable to recover it. 00:29:12.928 [2024-06-10 10:54:37.108910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.928 [2024-06-10 10:54:37.108918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.928 qpair failed and we were unable to recover it. 00:29:12.928 [2024-06-10 10:54:37.109287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.928 [2024-06-10 10:54:37.109295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.928 qpair failed and we were unable to recover it. 00:29:12.928 [2024-06-10 10:54:37.109717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.928 [2024-06-10 10:54:37.109725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.928 qpair failed and we were unable to recover it. 00:29:12.928 [2024-06-10 10:54:37.110107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.928 [2024-06-10 10:54:37.110115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.928 qpair failed and we were unable to recover it. 00:29:12.928 [2024-06-10 10:54:37.110508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.928 [2024-06-10 10:54:37.110516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.929 qpair failed and we were unable to recover it. 
00:29:12.929 [2024-06-10 10:54:37.110842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.929 [2024-06-10 10:54:37.110851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.929 qpair failed and we were unable to recover it. 00:29:12.929 [2024-06-10 10:54:37.111232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.929 [2024-06-10 10:54:37.111240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.929 qpair failed and we were unable to recover it. 00:29:12.929 [2024-06-10 10:54:37.111469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.929 [2024-06-10 10:54:37.111477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.929 qpair failed and we were unable to recover it. 00:29:12.929 [2024-06-10 10:54:37.111674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.929 [2024-06-10 10:54:37.111682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.929 qpair failed and we were unable to recover it. 00:29:12.929 [2024-06-10 10:54:37.112027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.929 [2024-06-10 10:54:37.112036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.929 qpair failed and we were unable to recover it. 00:29:12.929 [2024-06-10 10:54:37.112261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.929 [2024-06-10 10:54:37.112270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.929 qpair failed and we were unable to recover it. 00:29:12.929 [2024-06-10 10:54:37.112498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.929 [2024-06-10 10:54:37.112506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.929 qpair failed and we were unable to recover it. 00:29:12.929 [2024-06-10 10:54:37.112726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.929 [2024-06-10 10:54:37.112734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.929 qpair failed and we were unable to recover it. 00:29:12.929 [2024-06-10 10:54:37.113049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.929 [2024-06-10 10:54:37.113057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.929 qpair failed and we were unable to recover it. 00:29:12.929 [2024-06-10 10:54:37.113450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.929 [2024-06-10 10:54:37.113458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.929 qpair failed and we were unable to recover it. 
00:29:12.929 [2024-06-10 10:54:37.113687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.929 [2024-06-10 10:54:37.113694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.929 qpair failed and we were unable to recover it. 00:29:12.929 [2024-06-10 10:54:37.114093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.929 [2024-06-10 10:54:37.114101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.929 qpair failed and we were unable to recover it. 00:29:12.929 [2024-06-10 10:54:37.114460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.929 [2024-06-10 10:54:37.114469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.929 qpair failed and we were unable to recover it. 00:29:12.929 [2024-06-10 10:54:37.114817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.929 [2024-06-10 10:54:37.114825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.929 qpair failed and we were unable to recover it. 00:29:12.929 [2024-06-10 10:54:37.115208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.929 [2024-06-10 10:54:37.115216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.929 qpair failed and we were unable to recover it. 00:29:12.929 [2024-06-10 10:54:37.115441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.929 [2024-06-10 10:54:37.115449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.929 qpair failed and we were unable to recover it. 00:29:12.929 [2024-06-10 10:54:37.115818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.929 [2024-06-10 10:54:37.115825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.929 qpair failed and we were unable to recover it. 00:29:12.929 [2024-06-10 10:54:37.116207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.929 [2024-06-10 10:54:37.116215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.929 qpair failed and we were unable to recover it. 00:29:12.929 [2024-06-10 10:54:37.116658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.929 [2024-06-10 10:54:37.116666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.929 qpair failed and we were unable to recover it. 00:29:12.929 [2024-06-10 10:54:37.116728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.929 [2024-06-10 10:54:37.116735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.929 qpair failed and we were unable to recover it. 
00:29:12.929 [2024-06-10 10:54:37.117062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.929 [2024-06-10 10:54:37.117070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.929 qpair failed and we were unable to recover it. 00:29:12.929 [2024-06-10 10:54:37.117334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.929 [2024-06-10 10:54:37.117342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.929 qpair failed and we were unable to recover it. 00:29:12.929 [2024-06-10 10:54:37.117697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.929 [2024-06-10 10:54:37.117705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.929 qpair failed and we were unable to recover it. 00:29:12.929 [2024-06-10 10:54:37.117896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.929 [2024-06-10 10:54:37.117905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.929 qpair failed and we were unable to recover it. 00:29:12.929 [2024-06-10 10:54:37.117971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.929 [2024-06-10 10:54:37.117978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.929 qpair failed and we were unable to recover it. 00:29:12.929 [2024-06-10 10:54:37.118198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.929 [2024-06-10 10:54:37.118206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.929 qpair failed and we were unable to recover it. 00:29:12.929 [2024-06-10 10:54:37.118540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.929 [2024-06-10 10:54:37.118549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.929 qpair failed and we were unable to recover it. 00:29:12.929 [2024-06-10 10:54:37.118932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.929 [2024-06-10 10:54:37.118940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.929 qpair failed and we were unable to recover it. 00:29:12.929 [2024-06-10 10:54:37.119306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.929 [2024-06-10 10:54:37.119314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.929 qpair failed and we were unable to recover it. 00:29:12.929 [2024-06-10 10:54:37.119686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.929 [2024-06-10 10:54:37.119695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.929 qpair failed and we were unable to recover it. 
00:29:12.929 [2024-06-10 10:54:37.120057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.929 [2024-06-10 10:54:37.120065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.929 qpair failed and we were unable to recover it. 00:29:12.929 [2024-06-10 10:54:37.120491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.929 [2024-06-10 10:54:37.120498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.929 qpair failed and we were unable to recover it. 00:29:12.929 [2024-06-10 10:54:37.120849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.929 [2024-06-10 10:54:37.120857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.929 qpair failed and we were unable to recover it. 00:29:12.929 [2024-06-10 10:54:37.121083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.929 [2024-06-10 10:54:37.121091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.929 qpair failed and we were unable to recover it. 00:29:12.929 [2024-06-10 10:54:37.121449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.929 [2024-06-10 10:54:37.121458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.929 qpair failed and we were unable to recover it. 00:29:12.930 [2024-06-10 10:54:37.121838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.930 [2024-06-10 10:54:37.121846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.930 qpair failed and we were unable to recover it. 00:29:12.930 [2024-06-10 10:54:37.122204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.930 [2024-06-10 10:54:37.122212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.930 qpair failed and we were unable to recover it. 00:29:12.930 [2024-06-10 10:54:37.122474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.930 [2024-06-10 10:54:37.122481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.930 qpair failed and we were unable to recover it. 00:29:12.930 [2024-06-10 10:54:37.122843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.930 [2024-06-10 10:54:37.122851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.930 qpair failed and we were unable to recover it. 00:29:12.930 [2024-06-10 10:54:37.123193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.930 [2024-06-10 10:54:37.123201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.930 qpair failed and we were unable to recover it. 
00:29:12.930 [2024-06-10 10:54:37.123564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.930 [2024-06-10 10:54:37.123572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.930 qpair failed and we were unable to recover it. 00:29:12.930 [2024-06-10 10:54:37.123934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.930 [2024-06-10 10:54:37.123942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.930 qpair failed and we were unable to recover it. 00:29:12.930 [2024-06-10 10:54:37.124148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.930 [2024-06-10 10:54:37.124156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.930 qpair failed and we were unable to recover it. 00:29:12.930 [2024-06-10 10:54:37.124525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.930 [2024-06-10 10:54:37.124534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.930 qpair failed and we were unable to recover it. 00:29:12.930 [2024-06-10 10:54:37.124894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.930 [2024-06-10 10:54:37.124903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.930 qpair failed and we were unable to recover it. 00:29:12.930 [2024-06-10 10:54:37.125265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.930 [2024-06-10 10:54:37.125274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.930 qpair failed and we were unable to recover it. 00:29:12.930 [2024-06-10 10:54:37.125490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.930 [2024-06-10 10:54:37.125498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.930 qpair failed and we were unable to recover it. 00:29:12.930 [2024-06-10 10:54:37.125846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.930 [2024-06-10 10:54:37.125854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.930 qpair failed and we were unable to recover it. 00:29:12.930 [2024-06-10 10:54:37.126091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.930 [2024-06-10 10:54:37.126098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.930 qpair failed and we were unable to recover it. 00:29:12.930 [2024-06-10 10:54:37.126269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.930 [2024-06-10 10:54:37.126277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.930 qpair failed and we were unable to recover it. 
00:29:12.930 [2024-06-10 10:54:37.126643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.930 [2024-06-10 10:54:37.126650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.930 qpair failed and we were unable to recover it. 00:29:12.930 [2024-06-10 10:54:37.127038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.930 [2024-06-10 10:54:37.127047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.930 qpair failed and we were unable to recover it. 00:29:12.930 [2024-06-10 10:54:37.127417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.930 [2024-06-10 10:54:37.127425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.930 qpair failed and we were unable to recover it. 00:29:12.930 [2024-06-10 10:54:37.127844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.930 [2024-06-10 10:54:37.127852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.930 qpair failed and we were unable to recover it. 00:29:12.930 [2024-06-10 10:54:37.128213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.930 [2024-06-10 10:54:37.128221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.930 qpair failed and we were unable to recover it. 00:29:12.930 [2024-06-10 10:54:37.128585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.930 [2024-06-10 10:54:37.128593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.930 qpair failed and we were unable to recover it. 00:29:12.930 [2024-06-10 10:54:37.128952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.930 [2024-06-10 10:54:37.128961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.930 qpair failed and we were unable to recover it. 00:29:12.930 10:54:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:12.930 [2024-06-10 10:54:37.129183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.930 [2024-06-10 10:54:37.129192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.930 qpair failed and we were unable to recover it. 00:29:12.930 10:54:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0 00:29:12.930 [2024-06-10 10:54:37.129524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.930 [2024-06-10 10:54:37.129533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.930 qpair failed and we were unable to recover it. 
00:29:12.930 10:54:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:12.930 [2024-06-10 10:54:37.129917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.930 [2024-06-10 10:54:37.129925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.930 qpair failed and we were unable to recover it. 00:29:12.930 10:54:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:12.930 [2024-06-10 10:54:37.130154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.930 [2024-06-10 10:54:37.130161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.930 qpair failed and we were unable to recover it. 00:29:12.930 10:54:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:12.930 [2024-06-10 10:54:37.130359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.930 [2024-06-10 10:54:37.130378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.930 qpair failed and we were unable to recover it. 00:29:12.930 [2024-06-10 10:54:37.130746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.930 [2024-06-10 10:54:37.130757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.930 qpair failed and we were unable to recover it. 00:29:12.930 [2024-06-10 10:54:37.131138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.930 [2024-06-10 10:54:37.131145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.930 qpair failed and we were unable to recover it. 00:29:12.930 [2024-06-10 10:54:37.131487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.930 [2024-06-10 10:54:37.131495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.930 qpair failed and we were unable to recover it. 00:29:12.930 [2024-06-10 10:54:37.131856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.930 [2024-06-10 10:54:37.131862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.930 qpair failed and we were unable to recover it. 00:29:12.930 [2024-06-10 10:54:37.132203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.930 [2024-06-10 10:54:37.132209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.930 qpair failed and we were unable to recover it. 00:29:12.930 [2024-06-10 10:54:37.132331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.930 [2024-06-10 10:54:37.132338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.930 qpair failed and we were unable to recover it. 
00:29:12.930 [2024-06-10 10:54:37.132690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.930 [2024-06-10 10:54:37.132697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.930 qpair failed and we were unable to recover it. 00:29:12.930 [2024-06-10 10:54:37.132982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.930 [2024-06-10 10:54:37.132989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.930 qpair failed and we were unable to recover it. 00:29:12.930 [2024-06-10 10:54:37.133329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.930 [2024-06-10 10:54:37.133336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.930 qpair failed and we were unable to recover it. 00:29:12.931 [2024-06-10 10:54:37.133386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.931 [2024-06-10 10:54:37.133393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.931 qpair failed and we were unable to recover it. 00:29:12.931 [2024-06-10 10:54:37.133726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.931 [2024-06-10 10:54:37.133733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.931 qpair failed and we were unable to recover it. 00:29:12.931 [2024-06-10 10:54:37.133952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.931 [2024-06-10 10:54:37.133960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.931 qpair failed and we were unable to recover it. 00:29:12.931 [2024-06-10 10:54:37.134348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.931 [2024-06-10 10:54:37.134356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.931 qpair failed and we were unable to recover it. 00:29:12.931 [2024-06-10 10:54:37.134717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.931 [2024-06-10 10:54:37.134724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.931 qpair failed and we were unable to recover it. 00:29:12.931 [2024-06-10 10:54:37.134963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.931 [2024-06-10 10:54:37.134969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.931 qpair failed and we were unable to recover it. 00:29:12.931 [2024-06-10 10:54:37.135216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.931 [2024-06-10 10:54:37.135223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.931 qpair failed and we were unable to recover it. 
00:29:12.931 [2024-06-10 10:54:37.135582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.931 [2024-06-10 10:54:37.135589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.931 qpair failed and we were unable to recover it. 00:29:12.931 [2024-06-10 10:54:37.135784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.931 [2024-06-10 10:54:37.135791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.931 qpair failed and we were unable to recover it. 00:29:12.931 [2024-06-10 10:54:37.136153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.931 [2024-06-10 10:54:37.136159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.931 qpair failed and we were unable to recover it. 00:29:12.931 [2024-06-10 10:54:37.136565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.931 [2024-06-10 10:54:37.136572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.931 qpair failed and we were unable to recover it. 00:29:12.931 [2024-06-10 10:54:37.136935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.931 [2024-06-10 10:54:37.136949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.931 qpair failed and we were unable to recover it. 00:29:12.931 [2024-06-10 10:54:37.137310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.931 [2024-06-10 10:54:37.137317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.931 qpair failed and we were unable to recover it. 00:29:12.931 [2024-06-10 10:54:37.137716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.931 [2024-06-10 10:54:37.137723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.931 qpair failed and we were unable to recover it. 00:29:12.931 [2024-06-10 10:54:37.137782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.931 [2024-06-10 10:54:37.137788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.931 qpair failed and we were unable to recover it. 00:29:12.931 [2024-06-10 10:54:37.138118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.931 [2024-06-10 10:54:37.138126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.931 qpair failed and we were unable to recover it. 00:29:12.931 [2024-06-10 10:54:37.138532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.931 [2024-06-10 10:54:37.138540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.931 qpair failed and we were unable to recover it. 
00:29:12.931 [2024-06-10 10:54:37.138737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.931 [2024-06-10 10:54:37.138745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.931 qpair failed and we were unable to recover it. 00:29:12.931 [2024-06-10 10:54:37.139122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.931 [2024-06-10 10:54:37.139131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.931 qpair failed and we were unable to recover it. 00:29:12.931 [2024-06-10 10:54:37.139577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.931 [2024-06-10 10:54:37.139585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.931 qpair failed and we were unable to recover it. 00:29:12.931 [2024-06-10 10:54:37.139844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.931 [2024-06-10 10:54:37.139851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.931 qpair failed and we were unable to recover it. 00:29:12.931 [2024-06-10 10:54:37.139912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.931 [2024-06-10 10:54:37.139918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.931 qpair failed and we were unable to recover it. 00:29:12.931 [2024-06-10 10:54:37.140238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.931 [2024-06-10 10:54:37.140249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.931 qpair failed and we were unable to recover it. 00:29:12.931 [2024-06-10 10:54:37.140615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.931 [2024-06-10 10:54:37.140621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.931 qpair failed and we were unable to recover it. 00:29:12.931 [2024-06-10 10:54:37.140964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.931 [2024-06-10 10:54:37.140971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.931 qpair failed and we were unable to recover it. 00:29:12.931 [2024-06-10 10:54:37.141366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.931 [2024-06-10 10:54:37.141374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.931 qpair failed and we were unable to recover it. 00:29:12.931 [2024-06-10 10:54:37.141758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.931 [2024-06-10 10:54:37.141765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.931 qpair failed and we were unable to recover it. 
00:29:12.931 [2024-06-10 10:54:37.142084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.931 [2024-06-10 10:54:37.142091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.931 qpair failed and we were unable to recover it. 00:29:12.931 [2024-06-10 10:54:37.142459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.931 [2024-06-10 10:54:37.142465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.931 qpair failed and we were unable to recover it. 00:29:12.931 [2024-06-10 10:54:37.142742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.931 [2024-06-10 10:54:37.142749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.931 qpair failed and we were unable to recover it. 00:29:12.931 [2024-06-10 10:54:37.142973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.931 [2024-06-10 10:54:37.142982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.931 qpair failed and we were unable to recover it. 00:29:12.931 [2024-06-10 10:54:37.143309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.931 [2024-06-10 10:54:37.143316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.931 qpair failed and we were unable to recover it. 00:29:12.931 [2024-06-10 10:54:37.143707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.931 [2024-06-10 10:54:37.143716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.931 qpair failed and we were unable to recover it. 00:29:12.931 [2024-06-10 10:54:37.144071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.931 [2024-06-10 10:54:37.144078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.931 qpair failed and we were unable to recover it. 00:29:12.931 [2024-06-10 10:54:37.144273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.931 [2024-06-10 10:54:37.144280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.932 qpair failed and we were unable to recover it. 00:29:12.932 [2024-06-10 10:54:37.144669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.932 [2024-06-10 10:54:37.144676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.932 qpair failed and we were unable to recover it. 00:29:12.932 [2024-06-10 10:54:37.145014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.932 [2024-06-10 10:54:37.145020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.932 qpair failed and we were unable to recover it. 
00:29:12.932 [2024-06-10 10:54:37.145256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.932 [2024-06-10 10:54:37.145263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.932 qpair failed and we were unable to recover it. 00:29:12.932 [2024-06-10 10:54:37.145656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.932 [2024-06-10 10:54:37.145664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.932 qpair failed and we were unable to recover it. 00:29:12.932 [2024-06-10 10:54:37.145853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.932 [2024-06-10 10:54:37.145862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.932 qpair failed and we were unable to recover it. 00:29:12.932 [2024-06-10 10:54:37.146216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.932 [2024-06-10 10:54:37.146223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.932 qpair failed and we were unable to recover it. 00:29:12.932 [2024-06-10 10:54:37.146558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.932 [2024-06-10 10:54:37.146566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.932 qpair failed and we were unable to recover it. 00:29:12.932 [2024-06-10 10:54:37.146945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.932 [2024-06-10 10:54:37.146951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.932 qpair failed and we were unable to recover it. 00:29:12.932 [2024-06-10 10:54:37.147294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.932 [2024-06-10 10:54:37.147302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.932 qpair failed and we were unable to recover it. 00:29:12.932 [2024-06-10 10:54:37.147729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.932 [2024-06-10 10:54:37.147736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.932 qpair failed and we were unable to recover it. 00:29:12.932 [2024-06-10 10:54:37.147796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.932 [2024-06-10 10:54:37.147803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.932 qpair failed and we were unable to recover it. 00:29:12.932 [2024-06-10 10:54:37.148128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.932 [2024-06-10 10:54:37.148135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.932 qpair failed and we were unable to recover it. 
00:29:12.932 [2024-06-10 10:54:37.148489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.932 [2024-06-10 10:54:37.148497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.932 qpair failed and we were unable to recover it. 00:29:12.932 [2024-06-10 10:54:37.148830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.932 [2024-06-10 10:54:37.148838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.932 qpair failed and we were unable to recover it. 00:29:12.932 [2024-06-10 10:54:37.149100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.932 [2024-06-10 10:54:37.149107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.932 qpair failed and we were unable to recover it. 00:29:12.932 [2024-06-10 10:54:37.149472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.932 [2024-06-10 10:54:37.149479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.932 qpair failed and we were unable to recover it. 00:29:12.932 [2024-06-10 10:54:37.149828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.932 [2024-06-10 10:54:37.149834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.932 qpair failed and we were unable to recover it. 00:29:12.932 [2024-06-10 10:54:37.150204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.932 [2024-06-10 10:54:37.150218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.932 qpair failed and we were unable to recover it. 00:29:12.932 [2024-06-10 10:54:37.150532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.932 [2024-06-10 10:54:37.150540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.932 qpair failed and we were unable to recover it. 00:29:12.932 [2024-06-10 10:54:37.150965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.932 [2024-06-10 10:54:37.150972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.932 qpair failed and we were unable to recover it. 00:29:12.932 [2024-06-10 10:54:37.151124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.932 [2024-06-10 10:54:37.151131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.932 qpair failed and we were unable to recover it. 00:29:12.932 [2024-06-10 10:54:37.151396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.932 [2024-06-10 10:54:37.151404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.932 qpair failed and we were unable to recover it. 
00:29:12.932 [2024-06-10 10:54:37.151613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.932 [2024-06-10 10:54:37.151620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.932 qpair failed and we were unable to recover it. 00:29:12.932 [2024-06-10 10:54:37.151813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.932 [2024-06-10 10:54:37.151822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.932 qpair failed and we were unable to recover it. 00:29:12.932 [2024-06-10 10:54:37.152086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.932 [2024-06-10 10:54:37.152093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.932 qpair failed and we were unable to recover it. 00:29:12.932 [2024-06-10 10:54:37.152460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.932 [2024-06-10 10:54:37.152467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.932 qpair failed and we were unable to recover it. 00:29:12.932 [2024-06-10 10:54:37.152901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.932 [2024-06-10 10:54:37.152908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.932 qpair failed and we were unable to recover it. 00:29:12.932 [2024-06-10 10:54:37.153260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.932 [2024-06-10 10:54:37.153268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.932 qpair failed and we were unable to recover it. 00:29:12.932 [2024-06-10 10:54:37.153661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.933 [2024-06-10 10:54:37.153668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.933 qpair failed and we were unable to recover it. 00:29:12.933 [2024-06-10 10:54:37.154005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.933 [2024-06-10 10:54:37.154012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.933 qpair failed and we were unable to recover it. 00:29:12.933 [2024-06-10 10:54:37.154273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.933 [2024-06-10 10:54:37.154280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.933 qpair failed and we were unable to recover it. 00:29:12.933 [2024-06-10 10:54:37.154522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.933 [2024-06-10 10:54:37.154530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.933 qpair failed and we were unable to recover it. 
00:29:12.933 [2024-06-10 10:54:37.154723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.933 [2024-06-10 10:54:37.154729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.933 qpair failed and we were unable to recover it. 00:29:12.933 [2024-06-10 10:54:37.155084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.933 [2024-06-10 10:54:37.155091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.933 qpair failed and we were unable to recover it. 00:29:12.933 [2024-06-10 10:54:37.155406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.933 [2024-06-10 10:54:37.155412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.933 qpair failed and we were unable to recover it. 00:29:12.933 [2024-06-10 10:54:37.155755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.933 [2024-06-10 10:54:37.155764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.933 qpair failed and we were unable to recover it. 00:29:12.933 [2024-06-10 10:54:37.156118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.933 [2024-06-10 10:54:37.156125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.933 qpair failed and we were unable to recover it. 00:29:12.933 [2024-06-10 10:54:37.156320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.933 [2024-06-10 10:54:37.156327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.933 qpair failed and we were unable to recover it. 00:29:12.933 [2024-06-10 10:54:37.156717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.933 [2024-06-10 10:54:37.156724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.933 qpair failed and we were unable to recover it. 00:29:12.933 [2024-06-10 10:54:37.157082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.933 [2024-06-10 10:54:37.157091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.933 qpair failed and we were unable to recover it. 00:29:12.933 [2024-06-10 10:54:37.157315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.933 [2024-06-10 10:54:37.157323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.933 qpair failed and we were unable to recover it. 00:29:12.933 [2024-06-10 10:54:37.157622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.933 [2024-06-10 10:54:37.157629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.933 qpair failed and we were unable to recover it. 
00:29:12.933 [2024-06-10 10:54:37.157998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.933 [2024-06-10 10:54:37.158005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.933 qpair failed and we were unable to recover it. 00:29:12.933 [2024-06-10 10:54:37.158346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.933 [2024-06-10 10:54:37.158354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.933 qpair failed and we were unable to recover it. 00:29:12.933 [2024-06-10 10:54:37.158540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.933 [2024-06-10 10:54:37.158546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.933 qpair failed and we were unable to recover it. 00:29:12.933 [2024-06-10 10:54:37.158748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.933 [2024-06-10 10:54:37.158756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.933 qpair failed and we were unable to recover it. 00:29:12.933 [2024-06-10 10:54:37.159143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.933 [2024-06-10 10:54:37.159150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.933 qpair failed and we were unable to recover it. 00:29:12.933 [2024-06-10 10:54:37.159518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.933 [2024-06-10 10:54:37.159527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.933 qpair failed and we were unable to recover it. 00:29:12.933 [2024-06-10 10:54:37.159783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.933 [2024-06-10 10:54:37.159790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.933 qpair failed and we were unable to recover it. 00:29:12.933 [2024-06-10 10:54:37.160153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.933 [2024-06-10 10:54:37.160165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.933 qpair failed and we were unable to recover it. 00:29:12.933 [2024-06-10 10:54:37.160513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.933 [2024-06-10 10:54:37.160520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.933 qpair failed and we were unable to recover it. 00:29:12.933 [2024-06-10 10:54:37.160829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.933 [2024-06-10 10:54:37.160836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.933 qpair failed and we were unable to recover it. 
00:29:12.933 [2024-06-10 10:54:37.161215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.933 [2024-06-10 10:54:37.161222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.933 qpair failed and we were unable to recover it. 00:29:12.933 [2024-06-10 10:54:37.161439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.933 [2024-06-10 10:54:37.161447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.933 qpair failed and we were unable to recover it. 00:29:12.933 [2024-06-10 10:54:37.161817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.933 [2024-06-10 10:54:37.161825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.933 qpair failed and we were unable to recover it. 00:29:12.933 [2024-06-10 10:54:37.162162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.933 [2024-06-10 10:54:37.162169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.933 qpair failed and we were unable to recover it. 00:29:12.933 [2024-06-10 10:54:37.162543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.933 [2024-06-10 10:54:37.162550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.933 qpair failed and we were unable to recover it. 00:29:12.933 [2024-06-10 10:54:37.162773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.933 [2024-06-10 10:54:37.162779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.933 qpair failed and we were unable to recover it. 00:29:12.933 [2024-06-10 10:54:37.163144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.933 [2024-06-10 10:54:37.163152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.933 qpair failed and we were unable to recover it. 00:29:12.933 [2024-06-10 10:54:37.163569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.933 [2024-06-10 10:54:37.163576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.933 qpair failed and we were unable to recover it. 00:29:12.933 [2024-06-10 10:54:37.163925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.933 [2024-06-10 10:54:37.163932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.933 qpair failed and we were unable to recover it. 00:29:12.933 [2024-06-10 10:54:37.164156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.933 [2024-06-10 10:54:37.164163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.933 qpair failed and we were unable to recover it. 
00:29:12.933 [2024-06-10 10:54:37.164383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.933 [2024-06-10 10:54:37.164390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.934 qpair failed and we were unable to recover it. 00:29:12.934 [2024-06-10 10:54:37.164733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.934 [2024-06-10 10:54:37.164741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.934 qpair failed and we were unable to recover it. 00:29:12.934 [2024-06-10 10:54:37.164962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.934 [2024-06-10 10:54:37.164968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.934 qpair failed and we were unable to recover it. 00:29:12.934 [2024-06-10 10:54:37.165315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.934 [2024-06-10 10:54:37.165323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.934 qpair failed and we were unable to recover it. 00:29:12.934 [2024-06-10 10:54:37.165545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.934 [2024-06-10 10:54:37.165554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.934 qpair failed and we were unable to recover it. 00:29:12.934 [2024-06-10 10:54:37.165900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.934 [2024-06-10 10:54:37.165908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.934 qpair failed and we were unable to recover it. 00:29:12.934 [2024-06-10 10:54:37.166291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.934 [2024-06-10 10:54:37.166298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.934 qpair failed and we were unable to recover it. 00:29:12.934 [2024-06-10 10:54:37.166535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.934 [2024-06-10 10:54:37.166541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.934 qpair failed and we were unable to recover it. 00:29:12.934 [2024-06-10 10:54:37.166926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.934 [2024-06-10 10:54:37.166934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.934 qpair failed and we were unable to recover it. 00:29:12.934 [2024-06-10 10:54:37.167279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.934 [2024-06-10 10:54:37.167287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.934 qpair failed and we were unable to recover it. 
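For reference: errno = 111 in the repeated messages above is Linux ECONNREFUSED, meaning nothing was accepting TCP connections at 10.0.0.2 port 4420 while the initiator kept retrying, which is the condition a target_disconnect test is expected to provoke. A quick way to confirm the errno mapping on a test host (illustrative only, assumes python3 is installed; this command is not part of the captured log):

# map errno 111 to its symbolic name and message
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
# prints: ECONNREFUSED - Connection refused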
00:29:12.934 [2024-06-10 10:54:37.167662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.934 [2024-06-10 10:54:37.167669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.934 qpair failed and we were unable to recover it. 00:29:12.934 [2024-06-10 10:54:37.167933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.934 [2024-06-10 10:54:37.167940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.934 qpair failed and we were unable to recover it. 00:29:12.934 [2024-06-10 10:54:37.168279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.934 [2024-06-10 10:54:37.168285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.934 qpair failed and we were unable to recover it. 00:29:12.934 [2024-06-10 10:54:37.168625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.934 [2024-06-10 10:54:37.168634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.934 qpair failed and we were unable to recover it. 00:29:12.934 [2024-06-10 10:54:37.168894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.934 [2024-06-10 10:54:37.168900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.934 qpair failed and we were unable to recover it. 00:29:12.934 [2024-06-10 10:54:37.169115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.934 [2024-06-10 10:54:37.169122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.934 qpair failed and we were unable to recover it. 00:29:12.934 [2024-06-10 10:54:37.169459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.934 [2024-06-10 10:54:37.169466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.934 qpair failed and we were unable to recover it. 00:29:12.934 10:54:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:12.934 [2024-06-10 10:54:37.169806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.934 [2024-06-10 10:54:37.169814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.934 qpair failed and we were unable to recover it. 00:29:12.934 10:54:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:12.934 [2024-06-10 10:54:37.170169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.934 [2024-06-10 10:54:37.170177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.934 qpair failed and we were unable to recover it. 
00:29:12.934 10:54:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:12.934 [2024-06-10 10:54:37.170415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.934 [2024-06-10 10:54:37.170424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.934 qpair failed and we were unable to recover it. 00:29:12.934 10:54:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:12.934 [2024-06-10 10:54:37.170783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.934 [2024-06-10 10:54:37.170791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.934 qpair failed and we were unable to recover it. 00:29:12.934 [2024-06-10 10:54:37.171136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.934 [2024-06-10 10:54:37.171143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.934 qpair failed and we were unable to recover it. 00:29:12.934 [2024-06-10 10:54:37.171323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.934 [2024-06-10 10:54:37.171330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.934 qpair failed and we were unable to recover it. 00:29:12.934 [2024-06-10 10:54:37.171734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.934 [2024-06-10 10:54:37.171741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.934 qpair failed and we were unable to recover it. 00:29:12.934 [2024-06-10 10:54:37.171945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.934 [2024-06-10 10:54:37.171952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.934 qpair failed and we were unable to recover it. 00:29:12.934 [2024-06-10 10:54:37.172327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.934 [2024-06-10 10:54:37.172334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.934 qpair failed and we were unable to recover it. 00:29:12.934 [2024-06-10 10:54:37.172531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.934 [2024-06-10 10:54:37.172537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.934 qpair failed and we were unable to recover it. 00:29:12.934 [2024-06-10 10:54:37.172909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.934 [2024-06-10 10:54:37.172916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.934 qpair failed and we were unable to recover it. 
00:29:12.934 [2024-06-10 10:54:37.173099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.934 [2024-06-10 10:54:37.173107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.934 qpair failed and we were unable to recover it. 00:29:12.934 [2024-06-10 10:54:37.173438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.934 [2024-06-10 10:54:37.173445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.934 qpair failed and we were unable to recover it. 00:29:12.934 [2024-06-10 10:54:37.173832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.934 [2024-06-10 10:54:37.173839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.934 qpair failed and we were unable to recover it. 00:29:12.934 [2024-06-10 10:54:37.174188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.934 [2024-06-10 10:54:37.174195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.934 qpair failed and we were unable to recover it. 00:29:12.934 [2024-06-10 10:54:37.174422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.934 [2024-06-10 10:54:37.174429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.934 qpair failed and we were unable to recover it. 00:29:12.934 [2024-06-10 10:54:37.174788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.934 [2024-06-10 10:54:37.174796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.934 qpair failed and we were unable to recover it. 00:29:12.934 [2024-06-10 10:54:37.175186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.935 [2024-06-10 10:54:37.175193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.935 qpair failed and we were unable to recover it. 00:29:12.935 [2024-06-10 10:54:37.175552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.935 [2024-06-10 10:54:37.175560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.935 qpair failed and we were unable to recover it. 00:29:12.935 [2024-06-10 10:54:37.175923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.935 [2024-06-10 10:54:37.175930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.935 qpair failed and we were unable to recover it. 00:29:12.935 [2024-06-10 10:54:37.176289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.935 [2024-06-10 10:54:37.176296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.935 qpair failed and we were unable to recover it. 
00:29:12.935 [2024-06-10 10:54:37.176630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.935 [2024-06-10 10:54:37.176638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.935 qpair failed and we were unable to recover it. 00:29:12.935 [2024-06-10 10:54:37.176848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.935 [2024-06-10 10:54:37.176854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.935 qpair failed and we were unable to recover it. 00:29:12.935 [2024-06-10 10:54:37.177176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.935 [2024-06-10 10:54:37.177183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.935 qpair failed and we were unable to recover it. 00:29:12.935 [2024-06-10 10:54:37.177545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.935 [2024-06-10 10:54:37.177552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.935 qpair failed and we were unable to recover it. 00:29:12.935 [2024-06-10 10:54:37.177750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.935 [2024-06-10 10:54:37.177757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.935 qpair failed and we were unable to recover it. 00:29:12.935 [2024-06-10 10:54:37.178127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.935 [2024-06-10 10:54:37.178133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.935 qpair failed and we were unable to recover it. 00:29:12.935 [2024-06-10 10:54:37.178510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.935 [2024-06-10 10:54:37.178516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.935 qpair failed and we were unable to recover it. 00:29:12.935 [2024-06-10 10:54:37.178832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.935 [2024-06-10 10:54:37.178838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.935 qpair failed and we were unable to recover it. 00:29:12.935 [2024-06-10 10:54:37.179053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.935 [2024-06-10 10:54:37.179059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.935 qpair failed and we were unable to recover it. 00:29:12.935 [2024-06-10 10:54:37.179427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.935 [2024-06-10 10:54:37.179434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.935 qpair failed and we were unable to recover it. 
00:29:12.935 [2024-06-10 10:54:37.179793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.935 [2024-06-10 10:54:37.179800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.935 qpair failed and we were unable to recover it. 00:29:12.935 [2024-06-10 10:54:37.180160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.935 [2024-06-10 10:54:37.180167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.935 qpair failed and we were unable to recover it. 00:29:12.935 [2024-06-10 10:54:37.180520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.935 [2024-06-10 10:54:37.180528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:12.935 qpair failed and we were unable to recover it. 00:29:13.200 [2024-06-10 10:54:37.180954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.200 [2024-06-10 10:54:37.180961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.200 qpair failed and we were unable to recover it. 00:29:13.200 [2024-06-10 10:54:37.181298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.200 [2024-06-10 10:54:37.181306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.200 qpair failed and we were unable to recover it. 00:29:13.200 [2024-06-10 10:54:37.181671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.200 [2024-06-10 10:54:37.181678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.200 qpair failed and we were unable to recover it. 00:29:13.200 [2024-06-10 10:54:37.181903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.200 [2024-06-10 10:54:37.181910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.200 qpair failed and we were unable to recover it. 00:29:13.200 [2024-06-10 10:54:37.182253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.200 [2024-06-10 10:54:37.182260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.200 qpair failed and we were unable to recover it. 00:29:13.200 [2024-06-10 10:54:37.182519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.200 [2024-06-10 10:54:37.182527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.200 qpair failed and we were unable to recover it. 00:29:13.200 [2024-06-10 10:54:37.182851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.200 [2024-06-10 10:54:37.182857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.200 qpair failed and we were unable to recover it. 
00:29:13.200 [2024-06-10 10:54:37.183219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.200 [2024-06-10 10:54:37.183227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.200 qpair failed and we were unable to recover it. 00:29:13.200 [2024-06-10 10:54:37.183479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.200 [2024-06-10 10:54:37.183488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.200 qpair failed and we were unable to recover it. 00:29:13.200 [2024-06-10 10:54:37.183836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.200 [2024-06-10 10:54:37.183845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.200 qpair failed and we were unable to recover it. 00:29:13.200 [2024-06-10 10:54:37.184042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.200 [2024-06-10 10:54:37.184050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.200 qpair failed and we were unable to recover it. 00:29:13.200 [2024-06-10 10:54:37.184380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.200 [2024-06-10 10:54:37.184387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.200 qpair failed and we were unable to recover it. 00:29:13.200 [2024-06-10 10:54:37.184759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.200 [2024-06-10 10:54:37.184766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.200 qpair failed and we were unable to recover it. 00:29:13.200 Malloc0 00:29:13.200 [2024-06-10 10:54:37.185167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.200 [2024-06-10 10:54:37.185180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.200 qpair failed and we were unable to recover it. 00:29:13.200 [2024-06-10 10:54:37.185387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.200 [2024-06-10 10:54:37.185394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.200 qpair failed and we were unable to recover it. 00:29:13.200 [2024-06-10 10:54:37.185723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.200 [2024-06-10 10:54:37.185729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.200 qpair failed and we were unable to recover it. 
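Interleaved with the connection retries, the xtrace above shows the test re-arming its cleanup trap and then creating the backing device with rpc_cmd bdev_malloc_create 64 512 -b Malloc0; the bare "Malloc0" line is that RPC's output, the name of the created bdev. Run by hand against an already-running target application, the equivalent step would look roughly like the sketch below (the scripts/rpc.py path and the default RPC socket are assumptions; the sizes follow the log: 64 MB total, 512-byte blocks):

# create a 64 MB RAM-backed bdev named Malloc0 with 512-byte blocks
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0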
00:29:13.200 10:54:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:13.200 [2024-06-10 10:54:37.186094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.200 [2024-06-10 10:54:37.186102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.200 qpair failed and we were unable to recover it. 00:29:13.200 10:54:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:13.200 [2024-06-10 10:54:37.186463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.200 [2024-06-10 10:54:37.186471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.200 qpair failed and we were unable to recover it. 00:29:13.200 10:54:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:13.200 [2024-06-10 10:54:37.186686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.200 [2024-06-10 10:54:37.186693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.200 qpair failed and we were unable to recover it. 00:29:13.200 10:54:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:13.200 [2024-06-10 10:54:37.186935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.200 [2024-06-10 10:54:37.186942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.200 qpair failed and we were unable to recover it. 00:29:13.200 [2024-06-10 10:54:37.187745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.200 [2024-06-10 10:54:37.187765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.200 qpair failed and we were unable to recover it. 00:29:13.200 [2024-06-10 10:54:37.188099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.200 [2024-06-10 10:54:37.188107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.200 qpair failed and we were unable to recover it. 00:29:13.200 [2024-06-10 10:54:37.188459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.200 [2024-06-10 10:54:37.188467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.200 qpair failed and we were unable to recover it. 00:29:13.201 [2024-06-10 10:54:37.188845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.201 [2024-06-10 10:54:37.188852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.201 qpair failed and we were unable to recover it. 
00:29:13.201 [2024-06-10 10:54:37.188868] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:13.201 [2024-06-10 10:54:37.189191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.201 [2024-06-10 10:54:37.189198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.201 qpair failed and we were unable to recover it. 00:29:13.201 [2024-06-10 10:54:37.189556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.201 [2024-06-10 10:54:37.189567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.201 qpair failed and we were unable to recover it. 00:29:13.201 [2024-06-10 10:54:37.189922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.201 [2024-06-10 10:54:37.189930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.201 qpair failed and we were unable to recover it. 00:29:13.201 [2024-06-10 10:54:37.190335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.201 [2024-06-10 10:54:37.190343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.201 qpair failed and we were unable to recover it. 00:29:13.201 [2024-06-10 10:54:37.190724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.201 [2024-06-10 10:54:37.190732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.201 qpair failed and we were unable to recover it. 00:29:13.201 [2024-06-10 10:54:37.190932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.201 [2024-06-10 10:54:37.190939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.201 qpair failed and we were unable to recover it. 00:29:13.201 [2024-06-10 10:54:37.191292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.201 [2024-06-10 10:54:37.191301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.201 qpair failed and we were unable to recover it. 00:29:13.201 [2024-06-10 10:54:37.191646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.201 [2024-06-10 10:54:37.191656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.201 qpair failed and we were unable to recover it. 00:29:13.201 [2024-06-10 10:54:37.192014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.201 [2024-06-10 10:54:37.192022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.201 qpair failed and we were unable to recover it. 
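The rpc_cmd nvmf_create_transport -t tcp -o call traced above is what triggers the "*** TCP Transport Init ***" notice from tcp.c. Outside the test harness the same step would be roughly the sketch below (illustrative; the scripts/rpc.py path is an assumption, and the extra -o option carried in the test's transport options is left out rather than guessed at):

# initialize the NVMe-oF TCP transport on the target
./scripts/rpc.py nvmf_create_transport -t tcp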
00:29:13.201 [2024-06-10 10:54:37.192382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.201 [2024-06-10 10:54:37.192391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.201 qpair failed and we were unable to recover it. 00:29:13.201 [2024-06-10 10:54:37.192756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.201 [2024-06-10 10:54:37.192764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.201 qpair failed and we were unable to recover it. 00:29:13.201 [2024-06-10 10:54:37.193112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.201 [2024-06-10 10:54:37.193120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.201 qpair failed and we were unable to recover it. 00:29:13.201 [2024-06-10 10:54:37.193505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.201 [2024-06-10 10:54:37.193513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.201 qpair failed and we were unable to recover it. 00:29:13.201 [2024-06-10 10:54:37.193884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.201 [2024-06-10 10:54:37.193892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.201 qpair failed and we were unable to recover it. 00:29:13.201 [2024-06-10 10:54:37.194249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.201 [2024-06-10 10:54:37.194258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.201 qpair failed and we were unable to recover it. 00:29:13.201 [2024-06-10 10:54:37.194504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.201 [2024-06-10 10:54:37.194512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.201 qpair failed and we were unable to recover it. 00:29:13.201 [2024-06-10 10:54:37.194720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.201 [2024-06-10 10:54:37.194728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.201 qpair failed and we were unable to recover it. 00:29:13.201 [2024-06-10 10:54:37.195105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.201 [2024-06-10 10:54:37.195114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.201 qpair failed and we were unable to recover it. 00:29:13.201 [2024-06-10 10:54:37.195460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.201 [2024-06-10 10:54:37.195469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.201 qpair failed and we were unable to recover it. 
00:29:13.201 [2024-06-10 10:54:37.195847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.201 [2024-06-10 10:54:37.195855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.201 qpair failed and we were unable to recover it. 00:29:13.201 [2024-06-10 10:54:37.196214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.201 [2024-06-10 10:54:37.196222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.201 qpair failed and we were unable to recover it. 00:29:13.201 [2024-06-10 10:54:37.196572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.201 [2024-06-10 10:54:37.196580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.201 qpair failed and we were unable to recover it. 00:29:13.201 [2024-06-10 10:54:37.196842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.201 [2024-06-10 10:54:37.196852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.201 qpair failed and we were unable to recover it. 00:29:13.201 [2024-06-10 10:54:37.197193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.201 [2024-06-10 10:54:37.197201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.201 qpair failed and we were unable to recover it. 00:29:13.201 [2024-06-10 10:54:37.197556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.201 [2024-06-10 10:54:37.197564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.201 qpair failed and we were unable to recover it. 00:29:13.201 10:54:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:13.201 [2024-06-10 10:54:37.197789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.201 [2024-06-10 10:54:37.197797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.201 qpair failed and we were unable to recover it. 00:29:13.201 [2024-06-10 10:54:37.198005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.201 10:54:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:13.201 [2024-06-10 10:54:37.198013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.201 qpair failed and we were unable to recover it. 
00:29:13.201 10:54:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:13.201 [2024-06-10 10:54:37.198402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.201 [2024-06-10 10:54:37.198411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.201 qpair failed and we were unable to recover it. 00:29:13.201 10:54:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:13.201 [2024-06-10 10:54:37.198642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.201 [2024-06-10 10:54:37.198650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.201 qpair failed and we were unable to recover it. 00:29:13.201 [2024-06-10 10:54:37.199343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.201 [2024-06-10 10:54:37.199360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.201 qpair failed and we were unable to recover it. 00:29:13.201 [2024-06-10 10:54:37.199717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.201 [2024-06-10 10:54:37.199726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.201 qpair failed and we were unable to recover it. 00:29:13.201 [2024-06-10 10:54:37.200099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.201 [2024-06-10 10:54:37.200108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.201 qpair failed and we were unable to recover it. 00:29:13.201 [2024-06-10 10:54:37.200177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.201 [2024-06-10 10:54:37.200185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.201 qpair failed and we were unable to recover it. 00:29:13.201 [2024-06-10 10:54:37.200355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.201 [2024-06-10 10:54:37.200363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.201 qpair failed and we were unable to recover it. 00:29:13.201 [2024-06-10 10:54:37.200418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.201 [2024-06-10 10:54:37.200425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.201 qpair failed and we were unable to recover it. 00:29:13.202 [2024-06-10 10:54:37.200772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.202 [2024-06-10 10:54:37.200779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.202 qpair failed and we were unable to recover it. 
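Next the subsystem itself is created with rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001, that is, a subsystem with that NQN, any host allowed to connect (-a) and the serial number SPDK00000000000001 (-s). A rough manual equivalent (illustrative; scripts/rpc.py path assumed):

# create the subsystem, allow any host, set its serial number
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001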
00:29:13.202 [2024-06-10 10:54:37.201094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.202 [2024-06-10 10:54:37.201102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.202 qpair failed and we were unable to recover it. 00:29:13.202 [2024-06-10 10:54:37.201361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.202 [2024-06-10 10:54:37.201369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.202 qpair failed and we were unable to recover it. 00:29:13.202 [2024-06-10 10:54:37.201732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.202 [2024-06-10 10:54:37.201741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.202 qpair failed and we were unable to recover it. 00:29:13.202 [2024-06-10 10:54:37.202102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.202 [2024-06-10 10:54:37.202110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.202 qpair failed and we were unable to recover it. 00:29:13.202 [2024-06-10 10:54:37.202336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.202 [2024-06-10 10:54:37.202344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.202 qpair failed and we were unable to recover it. 00:29:13.202 [2024-06-10 10:54:37.202693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.202 [2024-06-10 10:54:37.202702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.202 qpair failed and we were unable to recover it. 00:29:13.202 [2024-06-10 10:54:37.202922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.202 [2024-06-10 10:54:37.202929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.202 qpair failed and we were unable to recover it. 00:29:13.202 [2024-06-10 10:54:37.203285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.202 [2024-06-10 10:54:37.203294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.202 qpair failed and we were unable to recover it. 00:29:13.202 [2024-06-10 10:54:37.203658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.202 [2024-06-10 10:54:37.203666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.202 qpair failed and we were unable to recover it. 00:29:13.202 [2024-06-10 10:54:37.204011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.202 [2024-06-10 10:54:37.204019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.202 qpair failed and we were unable to recover it. 
00:29:13.202 [2024-06-10 10:54:37.204378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.202 [2024-06-10 10:54:37.204387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.202 qpair failed and we were unable to recover it. 00:29:13.202 [2024-06-10 10:54:37.204749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.202 [2024-06-10 10:54:37.204757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.202 qpair failed and we were unable to recover it. 00:29:13.202 [2024-06-10 10:54:37.205103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.202 [2024-06-10 10:54:37.205113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.202 qpair failed and we were unable to recover it. 00:29:13.202 [2024-06-10 10:54:37.205462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.202 [2024-06-10 10:54:37.205470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.202 qpair failed and we were unable to recover it. 00:29:13.202 10:54:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:13.202 [2024-06-10 10:54:37.205817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.202 [2024-06-10 10:54:37.205827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.202 qpair failed and we were unable to recover it. 00:29:13.202 10:54:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:13.202 [2024-06-10 10:54:37.206184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.202 [2024-06-10 10:54:37.206192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.202 qpair failed and we were unable to recover it. 00:29:13.202 10:54:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:13.202 10:54:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:13.202 [2024-06-10 10:54:37.206556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.202 [2024-06-10 10:54:37.206565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.202 qpair failed and we were unable to recover it. 00:29:13.202 [2024-06-10 10:54:37.207163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.202 [2024-06-10 10:54:37.207179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.202 qpair failed and we were unable to recover it. 
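Finally the Malloc0 bdev is attached as a namespace via rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0. A rough manual equivalent is sketched below; the add_listener line is an assumption about the usual follow-up step (it is not shown in this excerpt) and is included only to indicate how the retrying initiator at 10.0.0.2:4420 would eventually be able to reconnect:

# expose Malloc0 as a namespace of the subsystem
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# assumed usual next step, not taken from this log: add a TCP listener
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420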
00:29:13.202 [2024-06-10 10:54:37.207255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.202 [2024-06-10 10:54:37.207264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.202 qpair failed and we were unable to recover it. 00:29:13.202 [2024-06-10 10:54:37.207618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.202 [2024-06-10 10:54:37.207627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.202 qpair failed and we were unable to recover it. 00:29:13.202 [2024-06-10 10:54:37.208027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.202 [2024-06-10 10:54:37.208036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.202 qpair failed and we were unable to recover it. 00:29:13.202 [2024-06-10 10:54:37.208396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.202 [2024-06-10 10:54:37.208405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.202 qpair failed and we were unable to recover it. 00:29:13.202 [2024-06-10 10:54:37.208584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.202 [2024-06-10 10:54:37.208591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.202 qpair failed and we were unable to recover it. 00:29:13.202 [2024-06-10 10:54:37.208796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.202 [2024-06-10 10:54:37.208804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.202 qpair failed and we were unable to recover it. 00:29:13.202 [2024-06-10 10:54:37.209090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.202 [2024-06-10 10:54:37.209098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.202 qpair failed and we were unable to recover it. 00:29:13.202 [2024-06-10 10:54:37.209454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.202 [2024-06-10 10:54:37.209462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.202 qpair failed and we were unable to recover it. 00:29:13.202 [2024-06-10 10:54:37.209844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.202 [2024-06-10 10:54:37.209852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.202 qpair failed and we were unable to recover it. 00:29:13.202 [2024-06-10 10:54:37.210213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.202 [2024-06-10 10:54:37.210221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.202 qpair failed and we were unable to recover it. 
00:29:13.202 [2024-06-10 10:54:37.210557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.202 [2024-06-10 10:54:37.210567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.202 qpair failed and we were unable to recover it. 00:29:13.202 [2024-06-10 10:54:37.210931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.202 [2024-06-10 10:54:37.210939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.202 qpair failed and we were unable to recover it. 00:29:13.202 [2024-06-10 10:54:37.211000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.202 [2024-06-10 10:54:37.211007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.202 qpair failed and we were unable to recover it. 00:29:13.202 [2024-06-10 10:54:37.211352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.202 [2024-06-10 10:54:37.211361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.202 qpair failed and we were unable to recover it. 00:29:13.202 [2024-06-10 10:54:37.211726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.202 [2024-06-10 10:54:37.211734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.202 qpair failed and we were unable to recover it. 00:29:13.202 [2024-06-10 10:54:37.212136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.202 [2024-06-10 10:54:37.212145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.202 qpair failed and we were unable to recover it. 00:29:13.202 [2024-06-10 10:54:37.212338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.202 [2024-06-10 10:54:37.212347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.202 qpair failed and we were unable to recover it. 00:29:13.202 [2024-06-10 10:54:37.212698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.202 [2024-06-10 10:54:37.212706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.203 qpair failed and we were unable to recover it. 00:29:13.203 [2024-06-10 10:54:37.213069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.203 [2024-06-10 10:54:37.213081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.203 qpair failed and we were unable to recover it. 00:29:13.203 [2024-06-10 10:54:37.213439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.203 [2024-06-10 10:54:37.213447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.203 qpair failed and we were unable to recover it. 
00:29:13.203 10:54:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:13.203 [2024-06-10 10:54:37.213805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.203 [2024-06-10 10:54:37.213814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.203 qpair failed and we were unable to recover it. 00:29:13.203 10:54:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:13.203 [2024-06-10 10:54:37.214235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.203 [2024-06-10 10:54:37.214248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.203 qpair failed and we were unable to recover it. 00:29:13.203 10:54:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:13.203 [2024-06-10 10:54:37.214607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.203 [2024-06-10 10:54:37.214618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.203 10:54:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:13.203 qpair failed and we were unable to recover it. 00:29:13.203 [2024-06-10 10:54:37.215303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.203 [2024-06-10 10:54:37.215321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.203 qpair failed and we were unable to recover it. 00:29:13.203 [2024-06-10 10:54:37.215728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.203 [2024-06-10 10:54:37.215737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.203 qpair failed and we were unable to recover it. 00:29:13.203 [2024-06-10 10:54:37.216152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.203 [2024-06-10 10:54:37.216161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.203 qpair failed and we were unable to recover it. 00:29:13.203 [2024-06-10 10:54:37.216533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.203 [2024-06-10 10:54:37.216542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.203 qpair failed and we were unable to recover it. 00:29:13.203 [2024-06-10 10:54:37.216847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.203 [2024-06-10 10:54:37.216855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.203 qpair failed and we were unable to recover it. 
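The errno = 111 reported by every one of these connect() failures is ECONNREFUSED ("Connection refused") on Linux: the initiator keeps dialing 10.0.0.2 port 4420 before the target's TCP listener, added by the nvmf_subsystem_add_listener call traced above and announced just below, is actually accepting connections. A quick way to confirm the errno mapping on the build host, assuming python3 is available there:

  # errno 111 == ECONNREFUSED on Linux
  python3 -c 'import errno, os; print(errno.ECONNREFUSED, os.strerror(111))'
  # expected output: 111 Connection refused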
00:29:13.203 [2024-06-10 10:54:37.216898] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:13.203 [2024-06-10 10:54:37.217076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.203 [2024-06-10 10:54:37.217084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b5c000b90 with addr=10.0.0.2, port=4420 00:29:13.203 qpair failed and we were unable to recover it. 00:29:13.203 [2024-06-10 10:54:37.217127] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:13.203 [2024-06-10 10:54:37.219486] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.203 [2024-06-10 10:54:37.219608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.203 [2024-06-10 10:54:37.219621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.203 [2024-06-10 10:54:37.219628] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.203 [2024-06-10 10:54:37.219633] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.203 [2024-06-10 10:54:37.219647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.203 qpair failed and we were unable to recover it. 00:29:13.203 10:54:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:13.203 10:54:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:13.203 10:54:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:13.203 10:54:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:13.203 [2024-06-10 10:54:37.229416] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.203 [2024-06-10 10:54:37.229480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.203 [2024-06-10 10:54:37.229495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.203 [2024-06-10 10:54:37.229500] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.203 [2024-06-10 10:54:37.229505] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.203 [2024-06-10 10:54:37.229516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.203 qpair failed and we were unable to recover it. 
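The decode_rpc_listen_address WARNING above fires because the JSON-RPC request for nvmf_subsystem_add_listener still carried the legacy listen_address.transport key; per the message itself, the accepted field is trtype and the old key is slated for removal in v24.09. A sketch of the non-deprecated shape, with field names taken from the warning and the documented listen_address object (the exact JSON the test's rpc_cmd wrapper emitted is not visible in the log, so this is illustrative):

  # deprecated:  "listen_address": { "transport": "TCP", "traddr": "10.0.0.2", "trsvcid": "4420" }
  # current:     "listen_address": { "trtype":    "TCP", "traddr": "10.0.0.2", "trsvcid": "4420" }
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The fabrics-level CONNECT failure block that starts right after the listen notice repeats for the rest of this section and is decoded after the next block.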
00:29:13.203 10:54:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:13.203 10:54:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1024287 00:29:13.203 [2024-06-10 10:54:37.239358] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.203 [2024-06-10 10:54:37.239434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.203 [2024-06-10 10:54:37.239446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.203 [2024-06-10 10:54:37.239452] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.203 [2024-06-10 10:54:37.239457] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.203 [2024-06-10 10:54:37.239468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.203 qpair failed and we were unable to recover it. 00:29:13.203 [2024-06-10 10:54:37.249311] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.203 [2024-06-10 10:54:37.249382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.203 [2024-06-10 10:54:37.249395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.203 [2024-06-10 10:54:37.249400] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.203 [2024-06-10 10:54:37.249404] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.203 [2024-06-10 10:54:37.249415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.203 qpair failed and we were unable to recover it. 00:29:13.203 [2024-06-10 10:54:37.259428] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.203 [2024-06-10 10:54:37.259494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.203 [2024-06-10 10:54:37.259506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.203 [2024-06-10 10:54:37.259511] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.203 [2024-06-10 10:54:37.259516] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.203 [2024-06-10 10:54:37.259527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.203 qpair failed and we were unable to recover it. 
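Decoding the failure block that repeats from here on: on the target side, _nvmf_ctrlr_add_io_qpair rejects the I/O queue pair because the CONNECT names controller ID 0x1, which the target does not recognize (consistent with this being the target-disconnect test, where the controller is intentionally disrupted while the initiator keeps retrying). On the initiator side the Fabrics CONNECT then completes with sct 1, sc 130, and the qpair is failed with CQ transport error -6, which is -ENXIO ("No such device or address", as the log prints). A small decode of those numbers, assuming the NVMe-oF Fabrics status-code table for the Connect command:

  # sct 1          -> Status Code Type 0x1 (Command Specific)
  # sc 130 (0x82)  -> Connect Invalid Parameters (Fabrics Connect command-specific status)
  python3 -c 'print(hex(130))'    # 0x82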
00:29:13.203 [2024-06-10 10:54:37.269474] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.203 [2024-06-10 10:54:37.269534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.203 [2024-06-10 10:54:37.269548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.203 [2024-06-10 10:54:37.269553] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.203 [2024-06-10 10:54:37.269558] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.203 [2024-06-10 10:54:37.269568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.203 qpair failed and we were unable to recover it. 00:29:13.203 [2024-06-10 10:54:37.279504] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.203 [2024-06-10 10:54:37.279563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.203 [2024-06-10 10:54:37.279575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.203 [2024-06-10 10:54:37.279580] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.203 [2024-06-10 10:54:37.279585] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.203 [2024-06-10 10:54:37.279596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.203 qpair failed and we were unable to recover it. 00:29:13.203 [2024-06-10 10:54:37.289530] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.203 [2024-06-10 10:54:37.289626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.203 [2024-06-10 10:54:37.289638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.203 [2024-06-10 10:54:37.289644] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.204 [2024-06-10 10:54:37.289648] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.204 [2024-06-10 10:54:37.289659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.204 qpair failed and we were unable to recover it. 
00:29:13.204 [2024-06-10 10:54:37.299491] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.204 [2024-06-10 10:54:37.299563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.204 [2024-06-10 10:54:37.299575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.204 [2024-06-10 10:54:37.299581] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.204 [2024-06-10 10:54:37.299585] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.204 [2024-06-10 10:54:37.299596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-06-10 10:54:37.309484] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.204 [2024-06-10 10:54:37.309585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.204 [2024-06-10 10:54:37.309597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.204 [2024-06-10 10:54:37.309602] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.204 [2024-06-10 10:54:37.309607] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.204 [2024-06-10 10:54:37.309618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-06-10 10:54:37.319610] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.204 [2024-06-10 10:54:37.319677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.204 [2024-06-10 10:54:37.319690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.204 [2024-06-10 10:54:37.319696] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.204 [2024-06-10 10:54:37.319700] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.204 [2024-06-10 10:54:37.319712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.204 qpair failed and we were unable to recover it. 
00:29:13.204 [2024-06-10 10:54:37.329631] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.204 [2024-06-10 10:54:37.329692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.204 [2024-06-10 10:54:37.329705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.204 [2024-06-10 10:54:37.329710] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.204 [2024-06-10 10:54:37.329715] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.204 [2024-06-10 10:54:37.329725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-06-10 10:54:37.339575] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.204 [2024-06-10 10:54:37.339676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.204 [2024-06-10 10:54:37.339688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.204 [2024-06-10 10:54:37.339694] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.204 [2024-06-10 10:54:37.339698] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.204 [2024-06-10 10:54:37.339709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-06-10 10:54:37.349703] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.204 [2024-06-10 10:54:37.349762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.204 [2024-06-10 10:54:37.349774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.204 [2024-06-10 10:54:37.349779] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.204 [2024-06-10 10:54:37.349784] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.204 [2024-06-10 10:54:37.349794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.204 qpair failed and we were unable to recover it. 
00:29:13.204 [2024-06-10 10:54:37.359741] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.204 [2024-06-10 10:54:37.359804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.204 [2024-06-10 10:54:37.359818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.204 [2024-06-10 10:54:37.359823] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.204 [2024-06-10 10:54:37.359828] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.204 [2024-06-10 10:54:37.359838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-06-10 10:54:37.369718] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.204 [2024-06-10 10:54:37.369777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.204 [2024-06-10 10:54:37.369789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.204 [2024-06-10 10:54:37.369794] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.204 [2024-06-10 10:54:37.369798] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.204 [2024-06-10 10:54:37.369809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-06-10 10:54:37.379675] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.204 [2024-06-10 10:54:37.379738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.204 [2024-06-10 10:54:37.379751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.204 [2024-06-10 10:54:37.379756] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.204 [2024-06-10 10:54:37.379760] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.204 [2024-06-10 10:54:37.379771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.204 qpair failed and we were unable to recover it. 
00:29:13.204 [2024-06-10 10:54:37.389797] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.204 [2024-06-10 10:54:37.389855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.204 [2024-06-10 10:54:37.389867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.204 [2024-06-10 10:54:37.389872] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.204 [2024-06-10 10:54:37.389876] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.204 [2024-06-10 10:54:37.389887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.204 qpair failed and we were unable to recover it. 00:29:13.204 [2024-06-10 10:54:37.399806] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.204 [2024-06-10 10:54:37.399869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.204 [2024-06-10 10:54:37.399880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.205 [2024-06-10 10:54:37.399885] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.205 [2024-06-10 10:54:37.399890] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.205 [2024-06-10 10:54:37.399905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.205 qpair failed and we were unable to recover it. 00:29:13.205 [2024-06-10 10:54:37.409818] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.205 [2024-06-10 10:54:37.409892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.205 [2024-06-10 10:54:37.409904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.205 [2024-06-10 10:54:37.409909] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.205 [2024-06-10 10:54:37.409914] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.205 [2024-06-10 10:54:37.409924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.205 qpair failed and we were unable to recover it. 
00:29:13.205 [2024-06-10 10:54:37.419885] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.205 [2024-06-10 10:54:37.419974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.205 [2024-06-10 10:54:37.419993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.205 [2024-06-10 10:54:37.420000] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.205 [2024-06-10 10:54:37.420005] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.205 [2024-06-10 10:54:37.420019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.205 qpair failed and we were unable to recover it. 00:29:13.205 [2024-06-10 10:54:37.429914] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.205 [2024-06-10 10:54:37.430014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.205 [2024-06-10 10:54:37.430033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.205 [2024-06-10 10:54:37.430040] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.205 [2024-06-10 10:54:37.430045] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.205 [2024-06-10 10:54:37.430058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.205 qpair failed and we were unable to recover it. 00:29:13.205 [2024-06-10 10:54:37.439949] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.205 [2024-06-10 10:54:37.440047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.205 [2024-06-10 10:54:37.440066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.205 [2024-06-10 10:54:37.440072] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.205 [2024-06-10 10:54:37.440077] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.205 [2024-06-10 10:54:37.440091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.205 qpair failed and we were unable to recover it. 
00:29:13.205 [2024-06-10 10:54:37.449968] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.205 [2024-06-10 10:54:37.450032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.205 [2024-06-10 10:54:37.450055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.205 [2024-06-10 10:54:37.450061] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.205 [2024-06-10 10:54:37.450066] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.205 [2024-06-10 10:54:37.450080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.205 qpair failed and we were unable to recover it. 00:29:13.205 [2024-06-10 10:54:37.460020] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.205 [2024-06-10 10:54:37.460087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.205 [2024-06-10 10:54:37.460106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.205 [2024-06-10 10:54:37.460112] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.205 [2024-06-10 10:54:37.460117] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.205 [2024-06-10 10:54:37.460131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.205 qpair failed and we were unable to recover it. 00:29:13.205 [2024-06-10 10:54:37.470098] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.205 [2024-06-10 10:54:37.470164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.205 [2024-06-10 10:54:37.470177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.205 [2024-06-10 10:54:37.470183] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.205 [2024-06-10 10:54:37.470188] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.205 [2024-06-10 10:54:37.470199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.205 qpair failed and we were unable to recover it. 
00:29:13.205 [2024-06-10 10:54:37.479950] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.205 [2024-06-10 10:54:37.480010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.205 [2024-06-10 10:54:37.480022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.205 [2024-06-10 10:54:37.480027] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.205 [2024-06-10 10:54:37.480032] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.205 [2024-06-10 10:54:37.480043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.205 qpair failed and we were unable to recover it. 00:29:13.468 [2024-06-10 10:54:37.490052] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.468 [2024-06-10 10:54:37.490116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.468 [2024-06-10 10:54:37.490129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.468 [2024-06-10 10:54:37.490134] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.468 [2024-06-10 10:54:37.490142] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.468 [2024-06-10 10:54:37.490153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.468 qpair failed and we were unable to recover it. 00:29:13.468 [2024-06-10 10:54:37.500127] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.468 [2024-06-10 10:54:37.500194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.468 [2024-06-10 10:54:37.500206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.468 [2024-06-10 10:54:37.500212] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.468 [2024-06-10 10:54:37.500216] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.468 [2024-06-10 10:54:37.500227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.468 qpair failed and we were unable to recover it. 
00:29:13.468 [2024-06-10 10:54:37.510105] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.468 [2024-06-10 10:54:37.510185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.468 [2024-06-10 10:54:37.510197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.468 [2024-06-10 10:54:37.510202] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.468 [2024-06-10 10:54:37.510207] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.468 [2024-06-10 10:54:37.510218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.468 qpair failed and we were unable to recover it. 00:29:13.468 [2024-06-10 10:54:37.520284] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.468 [2024-06-10 10:54:37.520357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.468 [2024-06-10 10:54:37.520369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.468 [2024-06-10 10:54:37.520374] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.468 [2024-06-10 10:54:37.520379] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.468 [2024-06-10 10:54:37.520391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.468 qpair failed and we were unable to recover it. 00:29:13.468 [2024-06-10 10:54:37.530225] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.468 [2024-06-10 10:54:37.530287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.468 [2024-06-10 10:54:37.530300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.468 [2024-06-10 10:54:37.530306] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.468 [2024-06-10 10:54:37.530310] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.468 [2024-06-10 10:54:37.530321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.468 qpair failed and we were unable to recover it. 
00:29:13.468 [2024-06-10 10:54:37.540125] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.468 [2024-06-10 10:54:37.540269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.468 [2024-06-10 10:54:37.540281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.468 [2024-06-10 10:54:37.540287] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.468 [2024-06-10 10:54:37.540291] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.468 [2024-06-10 10:54:37.540301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.468 qpair failed and we were unable to recover it. 00:29:13.468 [2024-06-10 10:54:37.550152] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.468 [2024-06-10 10:54:37.550215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.468 [2024-06-10 10:54:37.550227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.468 [2024-06-10 10:54:37.550232] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.468 [2024-06-10 10:54:37.550236] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.468 [2024-06-10 10:54:37.550250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.468 qpair failed and we were unable to recover it. 00:29:13.468 [2024-06-10 10:54:37.560254] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.468 [2024-06-10 10:54:37.560314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.468 [2024-06-10 10:54:37.560326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.468 [2024-06-10 10:54:37.560331] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.468 [2024-06-10 10:54:37.560335] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.468 [2024-06-10 10:54:37.560346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.468 qpair failed and we were unable to recover it. 
00:29:13.468 [2024-06-10 10:54:37.570276] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.468 [2024-06-10 10:54:37.570337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.468 [2024-06-10 10:54:37.570349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.468 [2024-06-10 10:54:37.570354] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.468 [2024-06-10 10:54:37.570359] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.468 [2024-06-10 10:54:37.570369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.468 qpair failed and we were unable to recover it. 00:29:13.468 [2024-06-10 10:54:37.580309] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.468 [2024-06-10 10:54:37.580474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.468 [2024-06-10 10:54:37.580486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.468 [2024-06-10 10:54:37.580494] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.468 [2024-06-10 10:54:37.580498] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.468 [2024-06-10 10:54:37.580509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.468 qpair failed and we were unable to recover it. 00:29:13.468 [2024-06-10 10:54:37.590251] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.468 [2024-06-10 10:54:37.590312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.468 [2024-06-10 10:54:37.590324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.468 [2024-06-10 10:54:37.590330] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.468 [2024-06-10 10:54:37.590334] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.468 [2024-06-10 10:54:37.590345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.468 qpair failed and we were unable to recover it. 
00:29:13.468 [2024-06-10 10:54:37.600396] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.468 [2024-06-10 10:54:37.600485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.468 [2024-06-10 10:54:37.600496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.468 [2024-06-10 10:54:37.600502] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.468 [2024-06-10 10:54:37.600506] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.468 [2024-06-10 10:54:37.600517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.468 qpair failed and we were unable to recover it. 00:29:13.468 [2024-06-10 10:54:37.610427] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.468 [2024-06-10 10:54:37.610495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.468 [2024-06-10 10:54:37.610508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.468 [2024-06-10 10:54:37.610513] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.468 [2024-06-10 10:54:37.610517] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.468 [2024-06-10 10:54:37.610528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-06-10 10:54:37.620310] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.469 [2024-06-10 10:54:37.620369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.469 [2024-06-10 10:54:37.620381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.469 [2024-06-10 10:54:37.620386] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.469 [2024-06-10 10:54:37.620391] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.469 [2024-06-10 10:54:37.620401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.469 qpair failed and we were unable to recover it. 
00:29:13.469 [2024-06-10 10:54:37.630446] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.469 [2024-06-10 10:54:37.630505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.469 [2024-06-10 10:54:37.630517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.469 [2024-06-10 10:54:37.630522] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.469 [2024-06-10 10:54:37.630526] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.469 [2024-06-10 10:54:37.630536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-06-10 10:54:37.640374] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.469 [2024-06-10 10:54:37.640433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.469 [2024-06-10 10:54:37.640445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.469 [2024-06-10 10:54:37.640450] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.469 [2024-06-10 10:54:37.640454] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.469 [2024-06-10 10:54:37.640465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-06-10 10:54:37.650560] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.469 [2024-06-10 10:54:37.650645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.469 [2024-06-10 10:54:37.650656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.469 [2024-06-10 10:54:37.650662] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.469 [2024-06-10 10:54:37.650667] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.469 [2024-06-10 10:54:37.650677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.469 qpair failed and we were unable to recover it. 
00:29:13.469 [2024-06-10 10:54:37.660538] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.469 [2024-06-10 10:54:37.660603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.469 [2024-06-10 10:54:37.660615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.469 [2024-06-10 10:54:37.660620] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.469 [2024-06-10 10:54:37.660624] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.469 [2024-06-10 10:54:37.660634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-06-10 10:54:37.670557] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.469 [2024-06-10 10:54:37.670615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.469 [2024-06-10 10:54:37.670627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.469 [2024-06-10 10:54:37.670635] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.469 [2024-06-10 10:54:37.670640] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.469 [2024-06-10 10:54:37.670650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-06-10 10:54:37.680633] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.469 [2024-06-10 10:54:37.680710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.469 [2024-06-10 10:54:37.680722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.469 [2024-06-10 10:54:37.680727] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.469 [2024-06-10 10:54:37.680732] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.469 [2024-06-10 10:54:37.680743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.469 qpair failed and we were unable to recover it. 
00:29:13.469 [2024-06-10 10:54:37.690622] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.469 [2024-06-10 10:54:37.690686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.469 [2024-06-10 10:54:37.690698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.469 [2024-06-10 10:54:37.690704] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.469 [2024-06-10 10:54:37.690708] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.469 [2024-06-10 10:54:37.690718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-06-10 10:54:37.700650] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.469 [2024-06-10 10:54:37.700716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.469 [2024-06-10 10:54:37.700727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.469 [2024-06-10 10:54:37.700732] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.469 [2024-06-10 10:54:37.700737] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.469 [2024-06-10 10:54:37.700748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-06-10 10:54:37.710674] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.469 [2024-06-10 10:54:37.710726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.469 [2024-06-10 10:54:37.710738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.469 [2024-06-10 10:54:37.710743] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.469 [2024-06-10 10:54:37.710748] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.469 [2024-06-10 10:54:37.710759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.469 qpair failed and we were unable to recover it. 
00:29:13.469 [2024-06-10 10:54:37.720714] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.469 [2024-06-10 10:54:37.720771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.469 [2024-06-10 10:54:37.720783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.469 [2024-06-10 10:54:37.720788] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.469 [2024-06-10 10:54:37.720792] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.469 [2024-06-10 10:54:37.720803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-06-10 10:54:37.730751] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.469 [2024-06-10 10:54:37.730811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.469 [2024-06-10 10:54:37.730823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.469 [2024-06-10 10:54:37.730828] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.469 [2024-06-10 10:54:37.730833] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.469 [2024-06-10 10:54:37.730843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.469 qpair failed and we were unable to recover it. 00:29:13.469 [2024-06-10 10:54:37.740817] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.469 [2024-06-10 10:54:37.740878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.469 [2024-06-10 10:54:37.740890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.469 [2024-06-10 10:54:37.740895] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.469 [2024-06-10 10:54:37.740899] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.469 [2024-06-10 10:54:37.740910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.469 qpair failed and we were unable to recover it. 
00:29:13.469 [2024-06-10 10:54:37.750823] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.469 [2024-06-10 10:54:37.750893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.469 [2024-06-10 10:54:37.750905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.470 [2024-06-10 10:54:37.750910] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.470 [2024-06-10 10:54:37.750915] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.470 [2024-06-10 10:54:37.750925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.470 qpair failed and we were unable to recover it. 00:29:13.731 [2024-06-10 10:54:37.760821] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.731 [2024-06-10 10:54:37.760910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.731 [2024-06-10 10:54:37.760925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.731 [2024-06-10 10:54:37.760930] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.731 [2024-06-10 10:54:37.760934] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.731 [2024-06-10 10:54:37.760945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.731 qpair failed and we were unable to recover it. 00:29:13.731 [2024-06-10 10:54:37.770863] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.731 [2024-06-10 10:54:37.770923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.731 [2024-06-10 10:54:37.770935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.731 [2024-06-10 10:54:37.770940] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.731 [2024-06-10 10:54:37.770944] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.731 [2024-06-10 10:54:37.770954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.731 qpair failed and we were unable to recover it. 
00:29:13.731 [2024-06-10 10:54:37.780933] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.731 [2024-06-10 10:54:37.781005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.731 [2024-06-10 10:54:37.781024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.731 [2024-06-10 10:54:37.781030] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.731 [2024-06-10 10:54:37.781035] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.731 [2024-06-10 10:54:37.781050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.731 qpair failed and we were unable to recover it. 00:29:13.731 [2024-06-10 10:54:37.790915] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.731 [2024-06-10 10:54:37.790979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.731 [2024-06-10 10:54:37.790998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.731 [2024-06-10 10:54:37.791004] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.731 [2024-06-10 10:54:37.791009] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.731 [2024-06-10 10:54:37.791023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.731 qpair failed and we were unable to recover it. 00:29:13.731 [2024-06-10 10:54:37.800980] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.731 [2024-06-10 10:54:37.801039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.731 [2024-06-10 10:54:37.801058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.731 [2024-06-10 10:54:37.801064] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.731 [2024-06-10 10:54:37.801069] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.731 [2024-06-10 10:54:37.801086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.731 qpair failed and we were unable to recover it. 
00:29:13.731 [2024-06-10 10:54:37.810979] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.732 [2024-06-10 10:54:37.811068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.732 [2024-06-10 10:54:37.811088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.732 [2024-06-10 10:54:37.811094] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.732 [2024-06-10 10:54:37.811099] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.732 [2024-06-10 10:54:37.811112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.732 qpair failed and we were unable to recover it. 00:29:13.732 [2024-06-10 10:54:37.820995] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.732 [2024-06-10 10:54:37.821064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.732 [2024-06-10 10:54:37.821077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.732 [2024-06-10 10:54:37.821083] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.732 [2024-06-10 10:54:37.821087] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.732 [2024-06-10 10:54:37.821099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.732 qpair failed and we were unable to recover it. 00:29:13.732 [2024-06-10 10:54:37.831010] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.732 [2024-06-10 10:54:37.831072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.732 [2024-06-10 10:54:37.831086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.732 [2024-06-10 10:54:37.831091] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.732 [2024-06-10 10:54:37.831095] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.732 [2024-06-10 10:54:37.831109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.732 qpair failed and we were unable to recover it. 
00:29:13.732 [2024-06-10 10:54:37.841105] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.732 [2024-06-10 10:54:37.841211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.732 [2024-06-10 10:54:37.841224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.732 [2024-06-10 10:54:37.841230] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.732 [2024-06-10 10:54:37.841235] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.732 [2024-06-10 10:54:37.841250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.732 qpair failed and we were unable to recover it. 00:29:13.732 [2024-06-10 10:54:37.851177] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.732 [2024-06-10 10:54:37.851234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.732 [2024-06-10 10:54:37.851253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.732 [2024-06-10 10:54:37.851258] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.732 [2024-06-10 10:54:37.851263] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.732 [2024-06-10 10:54:37.851274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.732 qpair failed and we were unable to recover it. 00:29:13.732 [2024-06-10 10:54:37.861108] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.732 [2024-06-10 10:54:37.861172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.732 [2024-06-10 10:54:37.861184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.732 [2024-06-10 10:54:37.861189] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.732 [2024-06-10 10:54:37.861194] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.732 [2024-06-10 10:54:37.861204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.732 qpair failed and we were unable to recover it. 
00:29:13.732 [2024-06-10 10:54:37.871134] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.732 [2024-06-10 10:54:37.871189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.732 [2024-06-10 10:54:37.871201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.732 [2024-06-10 10:54:37.871206] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.732 [2024-06-10 10:54:37.871211] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.732 [2024-06-10 10:54:37.871221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.732 qpair failed and we were unable to recover it. 00:29:13.732 [2024-06-10 10:54:37.881138] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.732 [2024-06-10 10:54:37.881192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.732 [2024-06-10 10:54:37.881204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.732 [2024-06-10 10:54:37.881209] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.732 [2024-06-10 10:54:37.881214] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.732 [2024-06-10 10:54:37.881225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.732 qpair failed and we were unable to recover it. 00:29:13.732 [2024-06-10 10:54:37.891209] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.732 [2024-06-10 10:54:37.891269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.732 [2024-06-10 10:54:37.891281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.732 [2024-06-10 10:54:37.891287] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.732 [2024-06-10 10:54:37.891294] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.732 [2024-06-10 10:54:37.891304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.732 qpair failed and we were unable to recover it. 
00:29:13.732 [2024-06-10 10:54:37.901217] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.732 [2024-06-10 10:54:37.901285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.732 [2024-06-10 10:54:37.901297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.732 [2024-06-10 10:54:37.901302] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.732 [2024-06-10 10:54:37.901307] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.732 [2024-06-10 10:54:37.901317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.732 qpair failed and we were unable to recover it. 00:29:13.732 [2024-06-10 10:54:37.911245] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.732 [2024-06-10 10:54:37.911304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.732 [2024-06-10 10:54:37.911316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.732 [2024-06-10 10:54:37.911321] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.732 [2024-06-10 10:54:37.911325] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.732 [2024-06-10 10:54:37.911336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.732 qpair failed and we were unable to recover it. 00:29:13.732 [2024-06-10 10:54:37.921278] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.732 [2024-06-10 10:54:37.921372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.732 [2024-06-10 10:54:37.921384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.732 [2024-06-10 10:54:37.921389] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.732 [2024-06-10 10:54:37.921394] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.732 [2024-06-10 10:54:37.921404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.732 qpair failed and we were unable to recover it. 
00:29:13.732 [2024-06-10 10:54:37.931302] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.732 [2024-06-10 10:54:37.931361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.732 [2024-06-10 10:54:37.931373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.732 [2024-06-10 10:54:37.931378] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.732 [2024-06-10 10:54:37.931382] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.732 [2024-06-10 10:54:37.931393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.732 qpair failed and we were unable to recover it. 00:29:13.732 [2024-06-10 10:54:37.941326] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.732 [2024-06-10 10:54:37.941393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.732 [2024-06-10 10:54:37.941405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.732 [2024-06-10 10:54:37.941410] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.732 [2024-06-10 10:54:37.941414] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.732 [2024-06-10 10:54:37.941425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.732 qpair failed and we were unable to recover it. 00:29:13.732 [2024-06-10 10:54:37.951338] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.732 [2024-06-10 10:54:37.951397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.732 [2024-06-10 10:54:37.951409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.732 [2024-06-10 10:54:37.951414] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.732 [2024-06-10 10:54:37.951418] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.732 [2024-06-10 10:54:37.951429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.732 qpair failed and we were unable to recover it. 
00:29:13.732 [2024-06-10 10:54:37.961453] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.732 [2024-06-10 10:54:37.961517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.732 [2024-06-10 10:54:37.961528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.732 [2024-06-10 10:54:37.961533] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.732 [2024-06-10 10:54:37.961538] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.732 [2024-06-10 10:54:37.961548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.732 qpair failed and we were unable to recover it. 00:29:13.732 [2024-06-10 10:54:37.971416] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.732 [2024-06-10 10:54:37.971483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.732 [2024-06-10 10:54:37.971496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.732 [2024-06-10 10:54:37.971501] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.732 [2024-06-10 10:54:37.971506] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.732 [2024-06-10 10:54:37.971519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.732 qpair failed and we were unable to recover it. 00:29:13.732 [2024-06-10 10:54:37.981545] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.732 [2024-06-10 10:54:37.981609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.732 [2024-06-10 10:54:37.981622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.732 [2024-06-10 10:54:37.981627] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.732 [2024-06-10 10:54:37.981634] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.732 [2024-06-10 10:54:37.981645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.732 qpair failed and we were unable to recover it. 
00:29:13.732 [2024-06-10 10:54:37.991357] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.732 [2024-06-10 10:54:37.991417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.732 [2024-06-10 10:54:37.991429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.732 [2024-06-10 10:54:37.991435] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.732 [2024-06-10 10:54:37.991439] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.732 [2024-06-10 10:54:37.991450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.732 qpair failed and we were unable to recover it. 00:29:13.732 [2024-06-10 10:54:38.001490] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.732 [2024-06-10 10:54:38.001549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.732 [2024-06-10 10:54:38.001561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.732 [2024-06-10 10:54:38.001566] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.732 [2024-06-10 10:54:38.001570] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.732 [2024-06-10 10:54:38.001581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.732 qpair failed and we were unable to recover it. 00:29:13.732 [2024-06-10 10:54:38.011543] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.732 [2024-06-10 10:54:38.011605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.732 [2024-06-10 10:54:38.011616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.732 [2024-06-10 10:54:38.011622] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.732 [2024-06-10 10:54:38.011626] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.732 [2024-06-10 10:54:38.011637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.732 qpair failed and we were unable to recover it. 
00:29:13.993 [2024-06-10 10:54:38.021583] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.993 [2024-06-10 10:54:38.021645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.993 [2024-06-10 10:54:38.021658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.993 [2024-06-10 10:54:38.021663] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.993 [2024-06-10 10:54:38.021668] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.993 [2024-06-10 10:54:38.021678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.993 qpair failed and we were unable to recover it. 00:29:13.993 [2024-06-10 10:54:38.031622] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.993 [2024-06-10 10:54:38.031677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.993 [2024-06-10 10:54:38.031689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.993 [2024-06-10 10:54:38.031695] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.993 [2024-06-10 10:54:38.031699] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.993 [2024-06-10 10:54:38.031709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.993 qpair failed and we were unable to recover it. 00:29:13.993 [2024-06-10 10:54:38.041599] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.993 [2024-06-10 10:54:38.041654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.993 [2024-06-10 10:54:38.041666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.993 [2024-06-10 10:54:38.041671] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.993 [2024-06-10 10:54:38.041676] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.993 [2024-06-10 10:54:38.041686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.993 qpair failed and we were unable to recover it. 
00:29:13.993 [2024-06-10 10:54:38.051523] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.993 [2024-06-10 10:54:38.051586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.993 [2024-06-10 10:54:38.051598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.993 [2024-06-10 10:54:38.051603] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.993 [2024-06-10 10:54:38.051607] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.993 [2024-06-10 10:54:38.051618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.993 qpair failed and we were unable to recover it. 00:29:13.993 [2024-06-10 10:54:38.061662] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.993 [2024-06-10 10:54:38.061723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.993 [2024-06-10 10:54:38.061735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.993 [2024-06-10 10:54:38.061740] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.993 [2024-06-10 10:54:38.061745] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.993 [2024-06-10 10:54:38.061755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.993 qpair failed and we were unable to recover it. 00:29:13.993 [2024-06-10 10:54:38.071686] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.993 [2024-06-10 10:54:38.071752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.993 [2024-06-10 10:54:38.071763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.993 [2024-06-10 10:54:38.071771] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.993 [2024-06-10 10:54:38.071775] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.993 [2024-06-10 10:54:38.071786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.993 qpair failed and we were unable to recover it. 
00:29:13.993 [2024-06-10 10:54:38.081601] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.993 [2024-06-10 10:54:38.081662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.993 [2024-06-10 10:54:38.081675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.993 [2024-06-10 10:54:38.081680] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.993 [2024-06-10 10:54:38.081684] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.993 [2024-06-10 10:54:38.081695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.993 qpair failed and we were unable to recover it. 00:29:13.993 [2024-06-10 10:54:38.091626] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.993 [2024-06-10 10:54:38.091687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.993 [2024-06-10 10:54:38.091699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.993 [2024-06-10 10:54:38.091704] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.993 [2024-06-10 10:54:38.091708] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.993 [2024-06-10 10:54:38.091719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.993 qpair failed and we were unable to recover it. 00:29:13.993 [2024-06-10 10:54:38.101799] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.993 [2024-06-10 10:54:38.101869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.993 [2024-06-10 10:54:38.101880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.993 [2024-06-10 10:54:38.101885] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.993 [2024-06-10 10:54:38.101890] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.993 [2024-06-10 10:54:38.101900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.993 qpair failed and we were unable to recover it. 
00:29:13.993 [2024-06-10 10:54:38.111812] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.993 [2024-06-10 10:54:38.111866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.993 [2024-06-10 10:54:38.111877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.993 [2024-06-10 10:54:38.111883] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.993 [2024-06-10 10:54:38.111887] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.993 [2024-06-10 10:54:38.111898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.993 qpair failed and we were unable to recover it. 00:29:13.993 [2024-06-10 10:54:38.121823] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.993 [2024-06-10 10:54:38.121883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.993 [2024-06-10 10:54:38.121895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.993 [2024-06-10 10:54:38.121900] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.993 [2024-06-10 10:54:38.121905] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.993 [2024-06-10 10:54:38.121915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.993 qpair failed and we were unable to recover it. 00:29:13.993 [2024-06-10 10:54:38.131775] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.993 [2024-06-10 10:54:38.131834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.993 [2024-06-10 10:54:38.131847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.993 [2024-06-10 10:54:38.131852] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.993 [2024-06-10 10:54:38.131856] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.993 [2024-06-10 10:54:38.131867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.993 qpair failed and we were unable to recover it. 
00:29:13.993 [2024-06-10 10:54:38.141762] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.993 [2024-06-10 10:54:38.141831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.993 [2024-06-10 10:54:38.141843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.993 [2024-06-10 10:54:38.141848] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.993 [2024-06-10 10:54:38.141852] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.993 [2024-06-10 10:54:38.141862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.993 qpair failed and we were unable to recover it. 00:29:13.993 [2024-06-10 10:54:38.151907] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.994 [2024-06-10 10:54:38.151963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.994 [2024-06-10 10:54:38.151975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.994 [2024-06-10 10:54:38.151980] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.994 [2024-06-10 10:54:38.151985] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.994 [2024-06-10 10:54:38.151995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.994 qpair failed and we were unable to recover it. 00:29:13.994 [2024-06-10 10:54:38.161934] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.994 [2024-06-10 10:54:38.161989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.994 [2024-06-10 10:54:38.162006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.994 [2024-06-10 10:54:38.162011] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.994 [2024-06-10 10:54:38.162015] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.994 [2024-06-10 10:54:38.162026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.994 qpair failed and we were unable to recover it. 
00:29:13.994 [2024-06-10 10:54:38.171969] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.994 [2024-06-10 10:54:38.172049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.994 [2024-06-10 10:54:38.172061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.994 [2024-06-10 10:54:38.172066] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.994 [2024-06-10 10:54:38.172070] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.994 [2024-06-10 10:54:38.172081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.994 qpair failed and we were unable to recover it. 00:29:13.994 [2024-06-10 10:54:38.181993] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.994 [2024-06-10 10:54:38.182055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.994 [2024-06-10 10:54:38.182067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.994 [2024-06-10 10:54:38.182072] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.994 [2024-06-10 10:54:38.182077] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.994 [2024-06-10 10:54:38.182087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.994 qpair failed and we were unable to recover it. 00:29:13.994 [2024-06-10 10:54:38.192003] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.994 [2024-06-10 10:54:38.192061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.994 [2024-06-10 10:54:38.192073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.994 [2024-06-10 10:54:38.192078] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.994 [2024-06-10 10:54:38.192082] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.994 [2024-06-10 10:54:38.192093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.994 qpair failed and we were unable to recover it. 
00:29:13.994 [2024-06-10 10:54:38.202055] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.994 [2024-06-10 10:54:38.202110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.994 [2024-06-10 10:54:38.202121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.994 [2024-06-10 10:54:38.202127] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.994 [2024-06-10 10:54:38.202131] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.994 [2024-06-10 10:54:38.202144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.994 qpair failed and we were unable to recover it. 00:29:13.994 [2024-06-10 10:54:38.212070] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.994 [2024-06-10 10:54:38.212130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.994 [2024-06-10 10:54:38.212142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.994 [2024-06-10 10:54:38.212147] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.994 [2024-06-10 10:54:38.212152] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.994 [2024-06-10 10:54:38.212162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.994 qpair failed and we were unable to recover it. 00:29:13.994 [2024-06-10 10:54:38.221990] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.994 [2024-06-10 10:54:38.222055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.994 [2024-06-10 10:54:38.222067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.994 [2024-06-10 10:54:38.222073] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.994 [2024-06-10 10:54:38.222077] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.994 [2024-06-10 10:54:38.222088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.994 qpair failed and we were unable to recover it. 
00:29:13.994 [2024-06-10 10:54:38.232120] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.994 [2024-06-10 10:54:38.232192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.994 [2024-06-10 10:54:38.232203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.994 [2024-06-10 10:54:38.232208] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.994 [2024-06-10 10:54:38.232212] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.994 [2024-06-10 10:54:38.232222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.994 qpair failed and we were unable to recover it. 00:29:13.994 [2024-06-10 10:54:38.242141] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.994 [2024-06-10 10:54:38.242203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.994 [2024-06-10 10:54:38.242215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.994 [2024-06-10 10:54:38.242220] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.994 [2024-06-10 10:54:38.242225] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.994 [2024-06-10 10:54:38.242235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.994 qpair failed and we were unable to recover it. 00:29:13.994 [2024-06-10 10:54:38.252101] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.994 [2024-06-10 10:54:38.252165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.994 [2024-06-10 10:54:38.252180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.994 [2024-06-10 10:54:38.252185] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.994 [2024-06-10 10:54:38.252189] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.994 [2024-06-10 10:54:38.252199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.994 qpair failed and we were unable to recover it. 
00:29:13.994 [2024-06-10 10:54:38.262224] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.994 [2024-06-10 10:54:38.262287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.994 [2024-06-10 10:54:38.262299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.994 [2024-06-10 10:54:38.262304] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.994 [2024-06-10 10:54:38.262309] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.994 [2024-06-10 10:54:38.262320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.994 qpair failed and we were unable to recover it. 00:29:13.994 [2024-06-10 10:54:38.272246] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.994 [2024-06-10 10:54:38.272308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.994 [2024-06-10 10:54:38.272320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.994 [2024-06-10 10:54:38.272325] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.994 [2024-06-10 10:54:38.272329] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:13.994 [2024-06-10 10:54:38.272340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.994 qpair failed and we were unable to recover it. 00:29:14.256 [2024-06-10 10:54:38.282272] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.256 [2024-06-10 10:54:38.282331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.256 [2024-06-10 10:54:38.282343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.256 [2024-06-10 10:54:38.282348] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.256 [2024-06-10 10:54:38.282352] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.256 [2024-06-10 10:54:38.282363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.256 qpair failed and we were unable to recover it. 
00:29:14.256 [2024-06-10 10:54:38.292185] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.256 [2024-06-10 10:54:38.292253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.256 [2024-06-10 10:54:38.292266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.256 [2024-06-10 10:54:38.292271] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.256 [2024-06-10 10:54:38.292278] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.256 [2024-06-10 10:54:38.292289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-06-10 10:54:38.302335] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.256 [2024-06-10 10:54:38.302397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.256 [2024-06-10 10:54:38.302409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.256 [2024-06-10 10:54:38.302414] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.256 [2024-06-10 10:54:38.302418] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.256 [2024-06-10 10:54:38.302429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-06-10 10:54:38.312390] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.256 [2024-06-10 10:54:38.312454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.256 [2024-06-10 10:54:38.312465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.256 [2024-06-10 10:54:38.312470] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.256 [2024-06-10 10:54:38.312475] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.256 [2024-06-10 10:54:38.312485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.256 qpair failed and we were unable to recover it. 
00:29:14.256 [2024-06-10 10:54:38.322284] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.256 [2024-06-10 10:54:38.322348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.256 [2024-06-10 10:54:38.322361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.256 [2024-06-10 10:54:38.322366] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.256 [2024-06-10 10:54:38.322370] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.256 [2024-06-10 10:54:38.322381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-06-10 10:54:38.332451] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.256 [2024-06-10 10:54:38.332512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.256 [2024-06-10 10:54:38.332524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.256 [2024-06-10 10:54:38.332529] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.256 [2024-06-10 10:54:38.332534] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.256 [2024-06-10 10:54:38.332544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-06-10 10:54:38.342462] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.256 [2024-06-10 10:54:38.342528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.256 [2024-06-10 10:54:38.342540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.256 [2024-06-10 10:54:38.342545] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.256 [2024-06-10 10:54:38.342550] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.256 [2024-06-10 10:54:38.342560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.256 qpair failed and we were unable to recover it. 
00:29:14.256 [2024-06-10 10:54:38.352434] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.256 [2024-06-10 10:54:38.352516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.256 [2024-06-10 10:54:38.352529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.256 [2024-06-10 10:54:38.352534] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.256 [2024-06-10 10:54:38.352538] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.256 [2024-06-10 10:54:38.352549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.256 qpair failed and we were unable to recover it. 00:29:14.256 [2024-06-10 10:54:38.362499] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.256 [2024-06-10 10:54:38.362598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.256 [2024-06-10 10:54:38.362610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.257 [2024-06-10 10:54:38.362616] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.257 [2024-06-10 10:54:38.362621] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.257 [2024-06-10 10:54:38.362631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-06-10 10:54:38.372528] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.257 [2024-06-10 10:54:38.372615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.257 [2024-06-10 10:54:38.372627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.257 [2024-06-10 10:54:38.372632] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.257 [2024-06-10 10:54:38.372637] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.257 [2024-06-10 10:54:38.372647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.257 qpair failed and we were unable to recover it. 
00:29:14.257 [2024-06-10 10:54:38.382559] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.257 [2024-06-10 10:54:38.382623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.257 [2024-06-10 10:54:38.382635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.257 [2024-06-10 10:54:38.382641] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.257 [2024-06-10 10:54:38.382648] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.257 [2024-06-10 10:54:38.382658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-06-10 10:54:38.392587] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.257 [2024-06-10 10:54:38.392675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.257 [2024-06-10 10:54:38.392687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.257 [2024-06-10 10:54:38.392692] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.257 [2024-06-10 10:54:38.392697] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.257 [2024-06-10 10:54:38.392707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-06-10 10:54:38.402490] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.257 [2024-06-10 10:54:38.402548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.257 [2024-06-10 10:54:38.402561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.257 [2024-06-10 10:54:38.402567] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.257 [2024-06-10 10:54:38.402571] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.257 [2024-06-10 10:54:38.402582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.257 qpair failed and we were unable to recover it. 
00:29:14.257 [2024-06-10 10:54:38.412644] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.257 [2024-06-10 10:54:38.412704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.257 [2024-06-10 10:54:38.412716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.257 [2024-06-10 10:54:38.412721] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.257 [2024-06-10 10:54:38.412726] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.257 [2024-06-10 10:54:38.412736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-06-10 10:54:38.422657] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.257 [2024-06-10 10:54:38.422739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.257 [2024-06-10 10:54:38.422751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.257 [2024-06-10 10:54:38.422756] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.257 [2024-06-10 10:54:38.422761] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.257 [2024-06-10 10:54:38.422771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-06-10 10:54:38.432698] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.257 [2024-06-10 10:54:38.432785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.257 [2024-06-10 10:54:38.432797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.257 [2024-06-10 10:54:38.432802] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.257 [2024-06-10 10:54:38.432807] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.257 [2024-06-10 10:54:38.432818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.257 qpair failed and we were unable to recover it. 
00:29:14.257 [2024-06-10 10:54:38.442722] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.257 [2024-06-10 10:54:38.442787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.257 [2024-06-10 10:54:38.442799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.257 [2024-06-10 10:54:38.442804] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.257 [2024-06-10 10:54:38.442809] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.257 [2024-06-10 10:54:38.442819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-06-10 10:54:38.452817] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.257 [2024-06-10 10:54:38.452926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.257 [2024-06-10 10:54:38.452939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.257 [2024-06-10 10:54:38.452944] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.257 [2024-06-10 10:54:38.452948] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.257 [2024-06-10 10:54:38.452961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-06-10 10:54:38.462782] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.257 [2024-06-10 10:54:38.462852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.257 [2024-06-10 10:54:38.462864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.257 [2024-06-10 10:54:38.462869] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.257 [2024-06-10 10:54:38.462873] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.257 [2024-06-10 10:54:38.462884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.257 qpair failed and we were unable to recover it. 
00:29:14.257 [2024-06-10 10:54:38.472812] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.257 [2024-06-10 10:54:38.472869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.257 [2024-06-10 10:54:38.472882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.257 [2024-06-10 10:54:38.472889] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.257 [2024-06-10 10:54:38.472894] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.257 [2024-06-10 10:54:38.472904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.257 qpair failed and we were unable to recover it. 00:29:14.257 [2024-06-10 10:54:38.482847] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.257 [2024-06-10 10:54:38.482900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.258 [2024-06-10 10:54:38.482912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.258 [2024-06-10 10:54:38.482917] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.258 [2024-06-10 10:54:38.482921] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.258 [2024-06-10 10:54:38.482931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.258 qpair failed and we were unable to recover it. 00:29:14.258 [2024-06-10 10:54:38.492774] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.258 [2024-06-10 10:54:38.492902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.258 [2024-06-10 10:54:38.492915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.258 [2024-06-10 10:54:38.492920] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.258 [2024-06-10 10:54:38.492924] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.258 [2024-06-10 10:54:38.492934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.258 qpair failed and we were unable to recover it. 
00:29:14.258 [2024-06-10 10:54:38.502773] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.258 [2024-06-10 10:54:38.502839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.258 [2024-06-10 10:54:38.502851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.258 [2024-06-10 10:54:38.502856] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.258 [2024-06-10 10:54:38.502861] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.258 [2024-06-10 10:54:38.502871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.258 qpair failed and we were unable to recover it. 00:29:14.258 [2024-06-10 10:54:38.512920] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.258 [2024-06-10 10:54:38.512980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.258 [2024-06-10 10:54:38.512992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.258 [2024-06-10 10:54:38.512997] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.258 [2024-06-10 10:54:38.513002] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.258 [2024-06-10 10:54:38.513012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.258 qpair failed and we were unable to recover it. 00:29:14.258 [2024-06-10 10:54:38.522991] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.258 [2024-06-10 10:54:38.523057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.258 [2024-06-10 10:54:38.523076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.258 [2024-06-10 10:54:38.523082] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.258 [2024-06-10 10:54:38.523087] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.258 [2024-06-10 10:54:38.523102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.258 qpair failed and we were unable to recover it. 
00:29:14.258 [2024-06-10 10:54:38.532986] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.258 [2024-06-10 10:54:38.533064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.258 [2024-06-10 10:54:38.533077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.258 [2024-06-10 10:54:38.533082] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.258 [2024-06-10 10:54:38.533087] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.258 [2024-06-10 10:54:38.533098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.258 qpair failed and we were unable to recover it. 00:29:14.520 [2024-06-10 10:54:38.542900] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.520 [2024-06-10 10:54:38.542969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.520 [2024-06-10 10:54:38.542988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.520 [2024-06-10 10:54:38.542994] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.520 [2024-06-10 10:54:38.542999] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.520 [2024-06-10 10:54:38.543013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.520 qpair failed and we were unable to recover it. 00:29:14.520 [2024-06-10 10:54:38.553029] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.520 [2024-06-10 10:54:38.553162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.520 [2024-06-10 10:54:38.553176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.520 [2024-06-10 10:54:38.553181] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.520 [2024-06-10 10:54:38.553186] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.520 [2024-06-10 10:54:38.553197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.520 qpair failed and we were unable to recover it. 
00:29:14.520 [2024-06-10 10:54:38.563122] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.520 [2024-06-10 10:54:38.563183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.520 [2024-06-10 10:54:38.563199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.520 [2024-06-10 10:54:38.563205] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.520 [2024-06-10 10:54:38.563210] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.520 [2024-06-10 10:54:38.563222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.520 qpair failed and we were unable to recover it. 00:29:14.520 [2024-06-10 10:54:38.573100] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.520 [2024-06-10 10:54:38.573160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.520 [2024-06-10 10:54:38.573173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.520 [2024-06-10 10:54:38.573178] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.520 [2024-06-10 10:54:38.573183] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.520 [2024-06-10 10:54:38.573193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.520 qpair failed and we were unable to recover it. 00:29:14.520 [2024-06-10 10:54:38.583001] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.520 [2024-06-10 10:54:38.583106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.520 [2024-06-10 10:54:38.583119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.520 [2024-06-10 10:54:38.583125] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.520 [2024-06-10 10:54:38.583129] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.520 [2024-06-10 10:54:38.583140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.520 qpair failed and we were unable to recover it. 
00:29:14.520 [2024-06-10 10:54:38.593117] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.520 [2024-06-10 10:54:38.593182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.520 [2024-06-10 10:54:38.593195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.520 [2024-06-10 10:54:38.593200] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.520 [2024-06-10 10:54:38.593205] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.520 [2024-06-10 10:54:38.593215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.520 qpair failed and we were unable to recover it. 00:29:14.520 [2024-06-10 10:54:38.603123] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.520 [2024-06-10 10:54:38.603181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.520 [2024-06-10 10:54:38.603193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.520 [2024-06-10 10:54:38.603199] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.520 [2024-06-10 10:54:38.603203] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.520 [2024-06-10 10:54:38.603217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.520 qpair failed and we were unable to recover it. 00:29:14.520 [2024-06-10 10:54:38.613193] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.520 [2024-06-10 10:54:38.613338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.520 [2024-06-10 10:54:38.613351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.520 [2024-06-10 10:54:38.613357] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.520 [2024-06-10 10:54:38.613361] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.520 [2024-06-10 10:54:38.613373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.520 qpair failed and we were unable to recover it. 
00:29:14.520 [2024-06-10 10:54:38.623236] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.520 [2024-06-10 10:54:38.623317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.520 [2024-06-10 10:54:38.623329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.520 [2024-06-10 10:54:38.623334] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.520 [2024-06-10 10:54:38.623339] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.520 [2024-06-10 10:54:38.623350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.520 qpair failed and we were unable to recover it. 00:29:14.520 [2024-06-10 10:54:38.633237] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.520 [2024-06-10 10:54:38.633333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.520 [2024-06-10 10:54:38.633346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.521 [2024-06-10 10:54:38.633351] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.521 [2024-06-10 10:54:38.633355] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.521 [2024-06-10 10:54:38.633367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.521 qpair failed and we were unable to recover it. 00:29:14.521 [2024-06-10 10:54:38.643282] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.521 [2024-06-10 10:54:38.643344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.521 [2024-06-10 10:54:38.643356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.521 [2024-06-10 10:54:38.643361] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.521 [2024-06-10 10:54:38.643366] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.521 [2024-06-10 10:54:38.643377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.521 qpair failed and we were unable to recover it. 
00:29:14.521 [2024-06-10 10:54:38.653307] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.521 [2024-06-10 10:54:38.653372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.521 [2024-06-10 10:54:38.653387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.521 [2024-06-10 10:54:38.653392] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.521 [2024-06-10 10:54:38.653397] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.521 [2024-06-10 10:54:38.653407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.521 qpair failed and we were unable to recover it. 00:29:14.521 [2024-06-10 10:54:38.663331] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.521 [2024-06-10 10:54:38.663395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.521 [2024-06-10 10:54:38.663407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.521 [2024-06-10 10:54:38.663412] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.521 [2024-06-10 10:54:38.663416] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.521 [2024-06-10 10:54:38.663427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.521 qpair failed and we were unable to recover it. 00:29:14.521 [2024-06-10 10:54:38.673377] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.521 [2024-06-10 10:54:38.673436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.521 [2024-06-10 10:54:38.673448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.521 [2024-06-10 10:54:38.673453] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.521 [2024-06-10 10:54:38.673458] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.521 [2024-06-10 10:54:38.673469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.521 qpair failed and we were unable to recover it. 
00:29:14.521 [2024-06-10 10:54:38.683409] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.521 [2024-06-10 10:54:38.683467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.521 [2024-06-10 10:54:38.683479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.521 [2024-06-10 10:54:38.683485] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.521 [2024-06-10 10:54:38.683489] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.521 [2024-06-10 10:54:38.683499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.521 qpair failed and we were unable to recover it. 00:29:14.521 [2024-06-10 10:54:38.693432] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.521 [2024-06-10 10:54:38.693490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.521 [2024-06-10 10:54:38.693501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.521 [2024-06-10 10:54:38.693507] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.521 [2024-06-10 10:54:38.693512] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.521 [2024-06-10 10:54:38.693525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.521 qpair failed and we were unable to recover it. 00:29:14.521 [2024-06-10 10:54:38.703516] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.521 [2024-06-10 10:54:38.703582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.521 [2024-06-10 10:54:38.703594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.521 [2024-06-10 10:54:38.703600] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.521 [2024-06-10 10:54:38.703604] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.521 [2024-06-10 10:54:38.703617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.521 qpair failed and we were unable to recover it. 
00:29:14.521 [2024-06-10 10:54:38.713497] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.521 [2024-06-10 10:54:38.713553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.521 [2024-06-10 10:54:38.713566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.521 [2024-06-10 10:54:38.713571] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.521 [2024-06-10 10:54:38.713575] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.521 [2024-06-10 10:54:38.713586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.521 qpair failed and we were unable to recover it. 00:29:14.521 [2024-06-10 10:54:38.723534] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.521 [2024-06-10 10:54:38.723600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.521 [2024-06-10 10:54:38.723612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.521 [2024-06-10 10:54:38.723617] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.521 [2024-06-10 10:54:38.723622] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.521 [2024-06-10 10:54:38.723632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.521 qpair failed and we were unable to recover it. 00:29:14.521 [2024-06-10 10:54:38.733549] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.521 [2024-06-10 10:54:38.733608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.521 [2024-06-10 10:54:38.733621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.521 [2024-06-10 10:54:38.733626] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.521 [2024-06-10 10:54:38.733630] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.521 [2024-06-10 10:54:38.733641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.521 qpair failed and we were unable to recover it. 
00:29:14.521 [2024-06-10 10:54:38.743558] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.521 [2024-06-10 10:54:38.743626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.521 [2024-06-10 10:54:38.743637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.521 [2024-06-10 10:54:38.743642] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.521 [2024-06-10 10:54:38.743647] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.521 [2024-06-10 10:54:38.743657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.521 qpair failed and we were unable to recover it. 00:29:14.521 [2024-06-10 10:54:38.753590] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.521 [2024-06-10 10:54:38.753648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.522 [2024-06-10 10:54:38.753660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.522 [2024-06-10 10:54:38.753665] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.522 [2024-06-10 10:54:38.753669] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.522 [2024-06-10 10:54:38.753679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.522 qpair failed and we were unable to recover it. 00:29:14.522 [2024-06-10 10:54:38.763628] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.522 [2024-06-10 10:54:38.763684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.522 [2024-06-10 10:54:38.763695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.522 [2024-06-10 10:54:38.763700] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.522 [2024-06-10 10:54:38.763705] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.522 [2024-06-10 10:54:38.763715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.522 qpair failed and we were unable to recover it. 
00:29:14.522 [2024-06-10 10:54:38.773703] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.522 [2024-06-10 10:54:38.773766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.522 [2024-06-10 10:54:38.773778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.522 [2024-06-10 10:54:38.773783] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.522 [2024-06-10 10:54:38.773788] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.522 [2024-06-10 10:54:38.773798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.522 qpair failed and we were unable to recover it. 00:29:14.522 [2024-06-10 10:54:38.783563] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.522 [2024-06-10 10:54:38.783630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.522 [2024-06-10 10:54:38.783642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.522 [2024-06-10 10:54:38.783647] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.522 [2024-06-10 10:54:38.783654] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.522 [2024-06-10 10:54:38.783664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.522 qpair failed and we were unable to recover it. 00:29:14.522 [2024-06-10 10:54:38.793691] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.522 [2024-06-10 10:54:38.793746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.522 [2024-06-10 10:54:38.793758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.522 [2024-06-10 10:54:38.793764] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.522 [2024-06-10 10:54:38.793768] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.522 [2024-06-10 10:54:38.793778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.522 qpair failed and we were unable to recover it. 
00:29:14.522 [2024-06-10 10:54:38.803731] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.522 [2024-06-10 10:54:38.803789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.522 [2024-06-10 10:54:38.803801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.522 [2024-06-10 10:54:38.803805] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.522 [2024-06-10 10:54:38.803810] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.522 [2024-06-10 10:54:38.803820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.522 qpair failed and we were unable to recover it. 00:29:14.784 [2024-06-10 10:54:38.813775] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.784 [2024-06-10 10:54:38.813833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.784 [2024-06-10 10:54:38.813845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.784 [2024-06-10 10:54:38.813851] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.784 [2024-06-10 10:54:38.813855] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.784 [2024-06-10 10:54:38.813865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.784 qpair failed and we were unable to recover it. 00:29:14.784 [2024-06-10 10:54:38.823807] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.784 [2024-06-10 10:54:38.823873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.784 [2024-06-10 10:54:38.823885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.784 [2024-06-10 10:54:38.823891] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.784 [2024-06-10 10:54:38.823895] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.784 [2024-06-10 10:54:38.823906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.784 qpair failed and we were unable to recover it. 
00:29:14.784 [2024-06-10 10:54:38.833830] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.784 [2024-06-10 10:54:38.833884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.784 [2024-06-10 10:54:38.833896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.784 [2024-06-10 10:54:38.833901] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.784 [2024-06-10 10:54:38.833906] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.784 [2024-06-10 10:54:38.833916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.784 qpair failed and we were unable to recover it. 00:29:14.784 [2024-06-10 10:54:38.843740] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.784 [2024-06-10 10:54:38.843796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.784 [2024-06-10 10:54:38.843807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.784 [2024-06-10 10:54:38.843813] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.784 [2024-06-10 10:54:38.843817] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.784 [2024-06-10 10:54:38.843828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.784 qpair failed and we were unable to recover it. 00:29:14.784 [2024-06-10 10:54:38.853892] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.784 [2024-06-10 10:54:38.853974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.784 [2024-06-10 10:54:38.853986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.785 [2024-06-10 10:54:38.853991] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.785 [2024-06-10 10:54:38.853996] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.785 [2024-06-10 10:54:38.854007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.785 qpair failed and we were unable to recover it. 
00:29:14.785 [2024-06-10 10:54:38.863924] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.785 [2024-06-10 10:54:38.863998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.785 [2024-06-10 10:54:38.864017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.785 [2024-06-10 10:54:38.864023] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.785 [2024-06-10 10:54:38.864027] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.785 [2024-06-10 10:54:38.864042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.785 qpair failed and we were unable to recover it. 00:29:14.785 [2024-06-10 10:54:38.873843] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.785 [2024-06-10 10:54:38.873906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.785 [2024-06-10 10:54:38.873925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.785 [2024-06-10 10:54:38.873934] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.785 [2024-06-10 10:54:38.873939] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.785 [2024-06-10 10:54:38.873953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.785 qpair failed and we were unable to recover it. 00:29:14.785 [2024-06-10 10:54:38.883969] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.785 [2024-06-10 10:54:38.884030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.785 [2024-06-10 10:54:38.884049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.785 [2024-06-10 10:54:38.884055] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.785 [2024-06-10 10:54:38.884060] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.785 [2024-06-10 10:54:38.884074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.785 qpair failed and we were unable to recover it. 
00:29:14.785 [2024-06-10 10:54:38.893990] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.785 [2024-06-10 10:54:38.894051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.785 [2024-06-10 10:54:38.894065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.785 [2024-06-10 10:54:38.894071] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.785 [2024-06-10 10:54:38.894075] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.785 [2024-06-10 10:54:38.894089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.785 qpair failed and we were unable to recover it. 00:29:14.785 [2024-06-10 10:54:38.904007] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.785 [2024-06-10 10:54:38.904073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.785 [2024-06-10 10:54:38.904086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.785 [2024-06-10 10:54:38.904091] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.785 [2024-06-10 10:54:38.904096] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.785 [2024-06-10 10:54:38.904107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.785 qpair failed and we were unable to recover it. 00:29:14.785 [2024-06-10 10:54:38.914056] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.785 [2024-06-10 10:54:38.914112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.785 [2024-06-10 10:54:38.914124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.785 [2024-06-10 10:54:38.914129] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.785 [2024-06-10 10:54:38.914133] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.785 [2024-06-10 10:54:38.914144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.785 qpair failed and we were unable to recover it. 
00:29:14.785 [2024-06-10 10:54:38.924072] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.785 [2024-06-10 10:54:38.924128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.785 [2024-06-10 10:54:38.924140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.785 [2024-06-10 10:54:38.924145] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.785 [2024-06-10 10:54:38.924150] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.785 [2024-06-10 10:54:38.924161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.785 qpair failed and we were unable to recover it. 00:29:14.785 [2024-06-10 10:54:38.934210] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.785 [2024-06-10 10:54:38.934273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.785 [2024-06-10 10:54:38.934285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.785 [2024-06-10 10:54:38.934290] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.785 [2024-06-10 10:54:38.934295] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.785 [2024-06-10 10:54:38.934306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.785 qpair failed and we were unable to recover it. 00:29:14.785 [2024-06-10 10:54:38.944133] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.785 [2024-06-10 10:54:38.944199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.785 [2024-06-10 10:54:38.944210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.785 [2024-06-10 10:54:38.944215] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.785 [2024-06-10 10:54:38.944220] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.785 [2024-06-10 10:54:38.944231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.785 qpair failed and we were unable to recover it. 
00:29:14.785 [2024-06-10 10:54:38.954153] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.785 [2024-06-10 10:54:38.954215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.785 [2024-06-10 10:54:38.954227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.785 [2024-06-10 10:54:38.954232] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.785 [2024-06-10 10:54:38.954237] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.785 [2024-06-10 10:54:38.954251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.785 qpair failed and we were unable to recover it. 00:29:14.785 [2024-06-10 10:54:38.964218] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.785 [2024-06-10 10:54:38.964313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.785 [2024-06-10 10:54:38.964329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.785 [2024-06-10 10:54:38.964334] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.785 [2024-06-10 10:54:38.964339] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.785 [2024-06-10 10:54:38.964350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.785 qpair failed and we were unable to recover it. 00:29:14.785 [2024-06-10 10:54:38.974229] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.785 [2024-06-10 10:54:38.974295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.785 [2024-06-10 10:54:38.974307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.785 [2024-06-10 10:54:38.974312] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.785 [2024-06-10 10:54:38.974317] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.785 [2024-06-10 10:54:38.974328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.785 qpair failed and we were unable to recover it. 
00:29:14.785 [2024-06-10 10:54:38.984244] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.785 [2024-06-10 10:54:38.984309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.785 [2024-06-10 10:54:38.984321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.785 [2024-06-10 10:54:38.984326] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.785 [2024-06-10 10:54:38.984330] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.785 [2024-06-10 10:54:38.984342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.785 qpair failed and we were unable to recover it. 00:29:14.786 [2024-06-10 10:54:38.994271] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.786 [2024-06-10 10:54:38.994328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.786 [2024-06-10 10:54:38.994341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.786 [2024-06-10 10:54:38.994346] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.786 [2024-06-10 10:54:38.994350] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.786 [2024-06-10 10:54:38.994361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.786 qpair failed and we were unable to recover it. 00:29:14.786 [2024-06-10 10:54:39.004325] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.786 [2024-06-10 10:54:39.004384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.786 [2024-06-10 10:54:39.004396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.786 [2024-06-10 10:54:39.004401] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.786 [2024-06-10 10:54:39.004406] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.786 [2024-06-10 10:54:39.004417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.786 qpair failed and we were unable to recover it. 
00:29:14.786 [2024-06-10 10:54:39.014328] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.786 [2024-06-10 10:54:39.014387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.786 [2024-06-10 10:54:39.014399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.786 [2024-06-10 10:54:39.014404] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.786 [2024-06-10 10:54:39.014409] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.786 [2024-06-10 10:54:39.014419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.786 qpair failed and we were unable to recover it. 00:29:14.786 [2024-06-10 10:54:39.024345] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.786 [2024-06-10 10:54:39.024418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.786 [2024-06-10 10:54:39.024431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.786 [2024-06-10 10:54:39.024437] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.786 [2024-06-10 10:54:39.024442] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.786 [2024-06-10 10:54:39.024456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.786 qpair failed and we were unable to recover it. 00:29:14.786 [2024-06-10 10:54:39.034380] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.786 [2024-06-10 10:54:39.034475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.786 [2024-06-10 10:54:39.034488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.786 [2024-06-10 10:54:39.034493] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.786 [2024-06-10 10:54:39.034498] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.786 [2024-06-10 10:54:39.034509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.786 qpair failed and we were unable to recover it. 
00:29:14.786 [2024-06-10 10:54:39.044300] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.786 [2024-06-10 10:54:39.044362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.786 [2024-06-10 10:54:39.044374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.786 [2024-06-10 10:54:39.044379] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.786 [2024-06-10 10:54:39.044384] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.786 [2024-06-10 10:54:39.044394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.786 qpair failed and we were unable to recover it. 00:29:14.786 [2024-06-10 10:54:39.054457] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.786 [2024-06-10 10:54:39.054515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.786 [2024-06-10 10:54:39.054530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.786 [2024-06-10 10:54:39.054536] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.786 [2024-06-10 10:54:39.054540] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.786 [2024-06-10 10:54:39.054551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.786 qpair failed and we were unable to recover it. 00:29:14.786 [2024-06-10 10:54:39.064481] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.786 [2024-06-10 10:54:39.064550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.786 [2024-06-10 10:54:39.064562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.786 [2024-06-10 10:54:39.064567] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.786 [2024-06-10 10:54:39.064572] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:14.786 [2024-06-10 10:54:39.064582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:14.786 qpair failed and we were unable to recover it. 
00:29:15.049 [2024-06-10 10:54:39.074508] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.049 [2024-06-10 10:54:39.074571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.049 [2024-06-10 10:54:39.074583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.049 [2024-06-10 10:54:39.074588] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.049 [2024-06-10 10:54:39.074593] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.049 [2024-06-10 10:54:39.074604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-06-10 10:54:39.084548] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.049 [2024-06-10 10:54:39.084605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.049 [2024-06-10 10:54:39.084617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.049 [2024-06-10 10:54:39.084622] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.049 [2024-06-10 10:54:39.084626] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.049 [2024-06-10 10:54:39.084637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-06-10 10:54:39.094448] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.049 [2024-06-10 10:54:39.094508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.049 [2024-06-10 10:54:39.094521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.049 [2024-06-10 10:54:39.094526] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.049 [2024-06-10 10:54:39.094530] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.049 [2024-06-10 10:54:39.094546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.049 qpair failed and we were unable to recover it. 
00:29:15.049 [2024-06-10 10:54:39.104560] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.049 [2024-06-10 10:54:39.104623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.049 [2024-06-10 10:54:39.104636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.049 [2024-06-10 10:54:39.104641] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.049 [2024-06-10 10:54:39.104645] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.049 [2024-06-10 10:54:39.104657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-06-10 10:54:39.114613] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.049 [2024-06-10 10:54:39.114681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.049 [2024-06-10 10:54:39.114695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.049 [2024-06-10 10:54:39.114700] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.049 [2024-06-10 10:54:39.114705] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.049 [2024-06-10 10:54:39.114716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-06-10 10:54:39.124609] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.049 [2024-06-10 10:54:39.124685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.049 [2024-06-10 10:54:39.124697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.049 [2024-06-10 10:54:39.124703] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.049 [2024-06-10 10:54:39.124707] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.049 [2024-06-10 10:54:39.124718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.049 qpair failed and we were unable to recover it. 
00:29:15.049 [2024-06-10 10:54:39.134671] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.049 [2024-06-10 10:54:39.134734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.049 [2024-06-10 10:54:39.134746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.049 [2024-06-10 10:54:39.134751] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.049 [2024-06-10 10:54:39.134756] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.049 [2024-06-10 10:54:39.134767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-06-10 10:54:39.144685] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.049 [2024-06-10 10:54:39.144754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.049 [2024-06-10 10:54:39.144768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.049 [2024-06-10 10:54:39.144773] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.049 [2024-06-10 10:54:39.144778] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.049 [2024-06-10 10:54:39.144788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-06-10 10:54:39.154741] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.049 [2024-06-10 10:54:39.154794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.049 [2024-06-10 10:54:39.154806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.049 [2024-06-10 10:54:39.154811] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.049 [2024-06-10 10:54:39.154815] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.049 [2024-06-10 10:54:39.154826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.049 qpair failed and we were unable to recover it. 
00:29:15.049 [2024-06-10 10:54:39.164748] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.049 [2024-06-10 10:54:39.164811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.049 [2024-06-10 10:54:39.164822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.049 [2024-06-10 10:54:39.164828] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.049 [2024-06-10 10:54:39.164833] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.049 [2024-06-10 10:54:39.164843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-06-10 10:54:39.174766] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.049 [2024-06-10 10:54:39.174826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.049 [2024-06-10 10:54:39.174837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.049 [2024-06-10 10:54:39.174843] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.049 [2024-06-10 10:54:39.174847] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.049 [2024-06-10 10:54:39.174858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.049 qpair failed and we were unable to recover it. 00:29:15.049 [2024-06-10 10:54:39.184771] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.049 [2024-06-10 10:54:39.184838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.049 [2024-06-10 10:54:39.184850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.049 [2024-06-10 10:54:39.184855] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.049 [2024-06-10 10:54:39.184862] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.049 [2024-06-10 10:54:39.184873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.049 qpair failed and we were unable to recover it. 
00:29:15.049 [2024-06-10 10:54:39.194814] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.049 [2024-06-10 10:54:39.194875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.049 [2024-06-10 10:54:39.194887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.049 [2024-06-10 10:54:39.194892] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.050 [2024-06-10 10:54:39.194897] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.050 [2024-06-10 10:54:39.194907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.050 qpair failed and we were unable to recover it. 00:29:15.050 [2024-06-10 10:54:39.204913] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.050 [2024-06-10 10:54:39.204971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.050 [2024-06-10 10:54:39.204983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.050 [2024-06-10 10:54:39.204988] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.050 [2024-06-10 10:54:39.204992] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.050 [2024-06-10 10:54:39.205003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.050 qpair failed and we were unable to recover it. 00:29:15.050 [2024-06-10 10:54:39.214895] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.050 [2024-06-10 10:54:39.214958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.050 [2024-06-10 10:54:39.214977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.050 [2024-06-10 10:54:39.214983] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.050 [2024-06-10 10:54:39.214988] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.050 [2024-06-10 10:54:39.215002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.050 qpair failed and we were unable to recover it. 
00:29:15.050 [2024-06-10 10:54:39.224909] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.050 [2024-06-10 10:54:39.224978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.050 [2024-06-10 10:54:39.224997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.050 [2024-06-10 10:54:39.225004] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.050 [2024-06-10 10:54:39.225009] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.050 [2024-06-10 10:54:39.225023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.050 qpair failed and we were unable to recover it. 00:29:15.050 [2024-06-10 10:54:39.234934] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.050 [2024-06-10 10:54:39.235001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.050 [2024-06-10 10:54:39.235015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.050 [2024-06-10 10:54:39.235021] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.050 [2024-06-10 10:54:39.235025] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.050 [2024-06-10 10:54:39.235037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.050 qpair failed and we were unable to recover it. 00:29:15.050 [2024-06-10 10:54:39.244864] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.050 [2024-06-10 10:54:39.244923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.050 [2024-06-10 10:54:39.244936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.050 [2024-06-10 10:54:39.244942] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.050 [2024-06-10 10:54:39.244946] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.050 [2024-06-10 10:54:39.244957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.050 qpair failed and we were unable to recover it. 
00:29:15.050 [2024-06-10 10:54:39.254971] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.050 [2024-06-10 10:54:39.255066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.050 [2024-06-10 10:54:39.255086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.050 [2024-06-10 10:54:39.255092] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.050 [2024-06-10 10:54:39.255098] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.050 [2024-06-10 10:54:39.255111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.050 qpair failed and we were unable to recover it. 00:29:15.050 [2024-06-10 10:54:39.265017] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.050 [2024-06-10 10:54:39.265087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.050 [2024-06-10 10:54:39.265106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.050 [2024-06-10 10:54:39.265112] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.050 [2024-06-10 10:54:39.265117] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.050 [2024-06-10 10:54:39.265131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.050 qpair failed and we were unable to recover it. 00:29:15.050 [2024-06-10 10:54:39.275068] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.050 [2024-06-10 10:54:39.275124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.050 [2024-06-10 10:54:39.275137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.050 [2024-06-10 10:54:39.275146] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.050 [2024-06-10 10:54:39.275150] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.050 [2024-06-10 10:54:39.275162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.050 qpair failed and we were unable to recover it. 
00:29:15.050 [2024-06-10 10:54:39.285087] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.050 [2024-06-10 10:54:39.285145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.050 [2024-06-10 10:54:39.285157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.050 [2024-06-10 10:54:39.285163] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.050 [2024-06-10 10:54:39.285167] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.050 [2024-06-10 10:54:39.285178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.050 qpair failed and we were unable to recover it. 00:29:15.050 [2024-06-10 10:54:39.295125] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.050 [2024-06-10 10:54:39.295190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.050 [2024-06-10 10:54:39.295202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.050 [2024-06-10 10:54:39.295208] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.050 [2024-06-10 10:54:39.295212] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.050 [2024-06-10 10:54:39.295223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.050 qpair failed and we were unable to recover it. 00:29:15.050 [2024-06-10 10:54:39.305136] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.050 [2024-06-10 10:54:39.305204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.050 [2024-06-10 10:54:39.305216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.050 [2024-06-10 10:54:39.305222] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.050 [2024-06-10 10:54:39.305226] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.050 [2024-06-10 10:54:39.305237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.050 qpair failed and we were unable to recover it. 
00:29:15.050 [2024-06-10 10:54:39.315158] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.050 [2024-06-10 10:54:39.315218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.050 [2024-06-10 10:54:39.315230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.050 [2024-06-10 10:54:39.315237] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.050 [2024-06-10 10:54:39.315245] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.050 [2024-06-10 10:54:39.315256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.050 qpair failed and we were unable to recover it. 00:29:15.050 [2024-06-10 10:54:39.325192] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.050 [2024-06-10 10:54:39.325252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.050 [2024-06-10 10:54:39.325265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.050 [2024-06-10 10:54:39.325270] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.050 [2024-06-10 10:54:39.325274] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.050 [2024-06-10 10:54:39.325285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.050 qpair failed and we were unable to recover it. 00:29:15.313 [2024-06-10 10:54:39.335222] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.313 [2024-06-10 10:54:39.335282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.313 [2024-06-10 10:54:39.335301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.313 [2024-06-10 10:54:39.335306] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.313 [2024-06-10 10:54:39.335311] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.313 [2024-06-10 10:54:39.335321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.313 qpair failed and we were unable to recover it. 
00:29:15.313 [2024-06-10 10:54:39.345239] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.313 [2024-06-10 10:54:39.345328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.313 [2024-06-10 10:54:39.345341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.313 [2024-06-10 10:54:39.345346] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.313 [2024-06-10 10:54:39.345351] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.313 [2024-06-10 10:54:39.345366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.313 qpair failed and we were unable to recover it. 00:29:15.313 [2024-06-10 10:54:39.355287] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.313 [2024-06-10 10:54:39.355345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.313 [2024-06-10 10:54:39.355357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.313 [2024-06-10 10:54:39.355362] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.313 [2024-06-10 10:54:39.355366] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.313 [2024-06-10 10:54:39.355377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.313 qpair failed and we were unable to recover it. 00:29:15.313 [2024-06-10 10:54:39.365186] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.313 [2024-06-10 10:54:39.365254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.313 [2024-06-10 10:54:39.365266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.313 [2024-06-10 10:54:39.365275] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.313 [2024-06-10 10:54:39.365279] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.313 [2024-06-10 10:54:39.365290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.313 qpair failed and we were unable to recover it. 
00:29:15.313 [2024-06-10 10:54:39.375341] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.313 [2024-06-10 10:54:39.375403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.313 [2024-06-10 10:54:39.375415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.313 [2024-06-10 10:54:39.375420] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.313 [2024-06-10 10:54:39.375424] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.313 [2024-06-10 10:54:39.375435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.313 qpair failed and we were unable to recover it. 00:29:15.313 [2024-06-10 10:54:39.385345] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.313 [2024-06-10 10:54:39.385410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.313 [2024-06-10 10:54:39.385422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.314 [2024-06-10 10:54:39.385427] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.314 [2024-06-10 10:54:39.385431] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.314 [2024-06-10 10:54:39.385442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.314 qpair failed and we were unable to recover it. 00:29:15.314 [2024-06-10 10:54:39.395385] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.314 [2024-06-10 10:54:39.395444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.314 [2024-06-10 10:54:39.395455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.314 [2024-06-10 10:54:39.395460] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.314 [2024-06-10 10:54:39.395465] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.314 [2024-06-10 10:54:39.395475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.314 qpair failed and we were unable to recover it. 
00:29:15.314 [2024-06-10 10:54:39.405299] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.314 [2024-06-10 10:54:39.405361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.314 [2024-06-10 10:54:39.405374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.314 [2024-06-10 10:54:39.405379] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.314 [2024-06-10 10:54:39.405383] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.314 [2024-06-10 10:54:39.405395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.314 qpair failed and we were unable to recover it. 00:29:15.314 [2024-06-10 10:54:39.415395] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.314 [2024-06-10 10:54:39.415456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.314 [2024-06-10 10:54:39.415469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.314 [2024-06-10 10:54:39.415474] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.314 [2024-06-10 10:54:39.415478] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.314 [2024-06-10 10:54:39.415489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.314 qpair failed and we were unable to recover it. 00:29:15.314 [2024-06-10 10:54:39.425473] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.314 [2024-06-10 10:54:39.425538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.314 [2024-06-10 10:54:39.425550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.314 [2024-06-10 10:54:39.425556] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.314 [2024-06-10 10:54:39.425560] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.314 [2024-06-10 10:54:39.425570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.314 qpair failed and we were unable to recover it. 
00:29:15.314 [2024-06-10 10:54:39.435492] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.314 [2024-06-10 10:54:39.435556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.314 [2024-06-10 10:54:39.435567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.314 [2024-06-10 10:54:39.435573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.314 [2024-06-10 10:54:39.435577] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.314 [2024-06-10 10:54:39.435587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.314 qpair failed and we were unable to recover it. 00:29:15.314 [2024-06-10 10:54:39.445489] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.314 [2024-06-10 10:54:39.445544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.314 [2024-06-10 10:54:39.445557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.314 [2024-06-10 10:54:39.445562] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.314 [2024-06-10 10:54:39.445566] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.314 [2024-06-10 10:54:39.445576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.314 qpair failed and we were unable to recover it. 00:29:15.314 [2024-06-10 10:54:39.455588] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.314 [2024-06-10 10:54:39.455655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.314 [2024-06-10 10:54:39.455670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.314 [2024-06-10 10:54:39.455675] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.314 [2024-06-10 10:54:39.455679] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.314 [2024-06-10 10:54:39.455690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.314 qpair failed and we were unable to recover it. 
00:29:15.314 [2024-06-10 10:54:39.465611] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.314 [2024-06-10 10:54:39.465677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.314 [2024-06-10 10:54:39.465689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.314 [2024-06-10 10:54:39.465694] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.314 [2024-06-10 10:54:39.465698] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.314 [2024-06-10 10:54:39.465709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.314 qpair failed and we were unable to recover it. 00:29:15.314 [2024-06-10 10:54:39.475517] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.314 [2024-06-10 10:54:39.475620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.314 [2024-06-10 10:54:39.475633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.314 [2024-06-10 10:54:39.475638] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.314 [2024-06-10 10:54:39.475643] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.314 [2024-06-10 10:54:39.475654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.314 qpair failed and we were unable to recover it. 00:29:15.314 [2024-06-10 10:54:39.485617] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.314 [2024-06-10 10:54:39.485720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.314 [2024-06-10 10:54:39.485732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.314 [2024-06-10 10:54:39.485738] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.314 [2024-06-10 10:54:39.485742] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.314 [2024-06-10 10:54:39.485753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.314 qpair failed and we were unable to recover it. 
00:29:15.314 [2024-06-10 10:54:39.495656] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.314 [2024-06-10 10:54:39.495717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.314 [2024-06-10 10:54:39.495729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.314 [2024-06-10 10:54:39.495734] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.314 [2024-06-10 10:54:39.495738] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.314 [2024-06-10 10:54:39.495751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.314 qpair failed and we were unable to recover it. 00:29:15.314 [2024-06-10 10:54:39.505700] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.314 [2024-06-10 10:54:39.505765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.314 [2024-06-10 10:54:39.505777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.314 [2024-06-10 10:54:39.505782] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.314 [2024-06-10 10:54:39.505786] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.314 [2024-06-10 10:54:39.505797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.314 qpair failed and we were unable to recover it. 00:29:15.314 [2024-06-10 10:54:39.515712] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.314 [2024-06-10 10:54:39.515766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.314 [2024-06-10 10:54:39.515778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.314 [2024-06-10 10:54:39.515783] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.314 [2024-06-10 10:54:39.515788] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.314 [2024-06-10 10:54:39.515798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.314 qpair failed and we were unable to recover it. 
00:29:15.315 [2024-06-10 10:54:39.525851] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.315 [2024-06-10 10:54:39.525917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.315 [2024-06-10 10:54:39.525928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.315 [2024-06-10 10:54:39.525933] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.315 [2024-06-10 10:54:39.525938] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.315 [2024-06-10 10:54:39.525948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.315 qpair failed and we were unable to recover it. 00:29:15.315 [2024-06-10 10:54:39.535821] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.315 [2024-06-10 10:54:39.535887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.315 [2024-06-10 10:54:39.535906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.315 [2024-06-10 10:54:39.535912] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.315 [2024-06-10 10:54:39.535917] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.315 [2024-06-10 10:54:39.535931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.315 qpair failed and we were unable to recover it. 00:29:15.315 [2024-06-10 10:54:39.545842] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.315 [2024-06-10 10:54:39.545906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.315 [2024-06-10 10:54:39.545925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.315 [2024-06-10 10:54:39.545931] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.315 [2024-06-10 10:54:39.545935] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.315 [2024-06-10 10:54:39.545946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.315 qpair failed and we were unable to recover it. 
00:29:15.315 [2024-06-10 10:54:39.555755] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.315 [2024-06-10 10:54:39.555867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.315 [2024-06-10 10:54:39.555880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.315 [2024-06-10 10:54:39.555885] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.315 [2024-06-10 10:54:39.555890] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.315 [2024-06-10 10:54:39.555901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.315 qpair failed and we were unable to recover it. 00:29:15.315 [2024-06-10 10:54:39.565758] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.315 [2024-06-10 10:54:39.565816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.315 [2024-06-10 10:54:39.565829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.315 [2024-06-10 10:54:39.565834] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.315 [2024-06-10 10:54:39.565838] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.315 [2024-06-10 10:54:39.565849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.315 qpair failed and we were unable to recover it. 00:29:15.315 [2024-06-10 10:54:39.575895] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.315 [2024-06-10 10:54:39.575955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.315 [2024-06-10 10:54:39.575968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.315 [2024-06-10 10:54:39.575973] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.315 [2024-06-10 10:54:39.575977] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.315 [2024-06-10 10:54:39.575988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.315 qpair failed and we were unable to recover it. 
00:29:15.315 [2024-06-10 10:54:39.585890] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.315 [2024-06-10 10:54:39.585962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.315 [2024-06-10 10:54:39.585980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.315 [2024-06-10 10:54:39.585986] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.315 [2024-06-10 10:54:39.585994] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.315 [2024-06-10 10:54:39.586009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.315 qpair failed and we were unable to recover it. 00:29:15.315 [2024-06-10 10:54:39.595965] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.315 [2024-06-10 10:54:39.596020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.315 [2024-06-10 10:54:39.596033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.315 [2024-06-10 10:54:39.596039] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.315 [2024-06-10 10:54:39.596043] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.315 [2024-06-10 10:54:39.596054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.315 qpair failed and we were unable to recover it. 00:29:15.578 [2024-06-10 10:54:39.605977] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.578 [2024-06-10 10:54:39.606030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.578 [2024-06-10 10:54:39.606042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.578 [2024-06-10 10:54:39.606047] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.578 [2024-06-10 10:54:39.606052] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.578 [2024-06-10 10:54:39.606062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.578 qpair failed and we were unable to recover it. 
00:29:15.578 [2024-06-10 10:54:39.616040] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.578 [2024-06-10 10:54:39.616127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.578 [2024-06-10 10:54:39.616139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.578 [2024-06-10 10:54:39.616145] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.578 [2024-06-10 10:54:39.616150] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.578 [2024-06-10 10:54:39.616160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.578 qpair failed and we were unable to recover it. 00:29:15.578 [2024-06-10 10:54:39.625919] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.578 [2024-06-10 10:54:39.625982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.578 [2024-06-10 10:54:39.625994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.578 [2024-06-10 10:54:39.626000] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.578 [2024-06-10 10:54:39.626004] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.578 [2024-06-10 10:54:39.626015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.578 qpair failed and we were unable to recover it. 00:29:15.578 [2024-06-10 10:54:39.636017] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.578 [2024-06-10 10:54:39.636114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.578 [2024-06-10 10:54:39.636126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.578 [2024-06-10 10:54:39.636131] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.578 [2024-06-10 10:54:39.636136] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.578 [2024-06-10 10:54:39.636147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.578 qpair failed and we were unable to recover it. 
00:29:15.578 [2024-06-10 10:54:39.646052] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.578 [2024-06-10 10:54:39.646150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.578 [2024-06-10 10:54:39.646163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.578 [2024-06-10 10:54:39.646168] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.578 [2024-06-10 10:54:39.646173] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.578 [2024-06-10 10:54:39.646183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.578 qpair failed and we were unable to recover it. 00:29:15.578 [2024-06-10 10:54:39.656128] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.578 [2024-06-10 10:54:39.656187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.578 [2024-06-10 10:54:39.656199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.578 [2024-06-10 10:54:39.656204] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.578 [2024-06-10 10:54:39.656209] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.578 [2024-06-10 10:54:39.656219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.578 qpair failed and we were unable to recover it. 00:29:15.578 [2024-06-10 10:54:39.666100] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.578 [2024-06-10 10:54:39.666159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.578 [2024-06-10 10:54:39.666171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.578 [2024-06-10 10:54:39.666176] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.578 [2024-06-10 10:54:39.666181] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.578 [2024-06-10 10:54:39.666191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.578 qpair failed and we were unable to recover it. 
00:29:15.578 [2024-06-10 10:54:39.676129] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.578 [2024-06-10 10:54:39.676180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.578 [2024-06-10 10:54:39.676192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.578 [2024-06-10 10:54:39.676197] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.578 [2024-06-10 10:54:39.676204] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.578 [2024-06-10 10:54:39.676215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.578 qpair failed and we were unable to recover it. 00:29:15.578 [2024-06-10 10:54:39.686118] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.578 [2024-06-10 10:54:39.686175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.579 [2024-06-10 10:54:39.686187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.579 [2024-06-10 10:54:39.686193] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.579 [2024-06-10 10:54:39.686198] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.579 [2024-06-10 10:54:39.686208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.579 qpair failed and we were unable to recover it. 00:29:15.579 [2024-06-10 10:54:39.696232] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.579 [2024-06-10 10:54:39.696293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.579 [2024-06-10 10:54:39.696305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.579 [2024-06-10 10:54:39.696310] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.579 [2024-06-10 10:54:39.696314] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.579 [2024-06-10 10:54:39.696325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.579 qpair failed and we were unable to recover it. 
00:29:15.579 [2024-06-10 10:54:39.706264] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.579 [2024-06-10 10:54:39.706338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.579 [2024-06-10 10:54:39.706350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.579 [2024-06-10 10:54:39.706355] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.579 [2024-06-10 10:54:39.706360] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.579 [2024-06-10 10:54:39.706370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.579 qpair failed and we were unable to recover it. 00:29:15.579 [2024-06-10 10:54:39.716111] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.579 [2024-06-10 10:54:39.716169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.579 [2024-06-10 10:54:39.716181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.579 [2024-06-10 10:54:39.716186] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.579 [2024-06-10 10:54:39.716191] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.579 [2024-06-10 10:54:39.716201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.579 qpair failed and we were unable to recover it. 00:29:15.579 [2024-06-10 10:54:39.726262] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.579 [2024-06-10 10:54:39.726318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.579 [2024-06-10 10:54:39.726330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.579 [2024-06-10 10:54:39.726335] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.579 [2024-06-10 10:54:39.726339] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.579 [2024-06-10 10:54:39.726350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.579 qpair failed and we were unable to recover it. 
00:29:15.579 [2024-06-10 10:54:39.736396] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.579 [2024-06-10 10:54:39.736456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.579 [2024-06-10 10:54:39.736468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.579 [2024-06-10 10:54:39.736473] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.579 [2024-06-10 10:54:39.736477] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.579 [2024-06-10 10:54:39.736488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.579 qpair failed and we were unable to recover it. 00:29:15.579 [2024-06-10 10:54:39.746374] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.579 [2024-06-10 10:54:39.746445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.579 [2024-06-10 10:54:39.746456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.579 [2024-06-10 10:54:39.746461] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.579 [2024-06-10 10:54:39.746466] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.579 [2024-06-10 10:54:39.746476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.579 qpair failed and we were unable to recover it. 00:29:15.579 [2024-06-10 10:54:39.756389] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.579 [2024-06-10 10:54:39.756469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.579 [2024-06-10 10:54:39.756480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.579 [2024-06-10 10:54:39.756486] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.579 [2024-06-10 10:54:39.756491] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.579 [2024-06-10 10:54:39.756502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.579 qpair failed and we were unable to recover it. 
00:29:15.579 [2024-06-10 10:54:39.766275] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.579 [2024-06-10 10:54:39.766327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.579 [2024-06-10 10:54:39.766339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.579 [2024-06-10 10:54:39.766347] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.579 [2024-06-10 10:54:39.766351] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.579 [2024-06-10 10:54:39.766362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.579 qpair failed and we were unable to recover it. 00:29:15.579 [2024-06-10 10:54:39.776453] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.579 [2024-06-10 10:54:39.776532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.579 [2024-06-10 10:54:39.776544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.579 [2024-06-10 10:54:39.776549] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.579 [2024-06-10 10:54:39.776554] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.579 [2024-06-10 10:54:39.776564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.579 qpair failed and we were unable to recover it. 00:29:15.579 [2024-06-10 10:54:39.786445] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.579 [2024-06-10 10:54:39.786505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.579 [2024-06-10 10:54:39.786517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.579 [2024-06-10 10:54:39.786522] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.579 [2024-06-10 10:54:39.786526] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.579 [2024-06-10 10:54:39.786537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.579 qpair failed and we were unable to recover it. 
00:29:15.579 [2024-06-10 10:54:39.796334] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.579 [2024-06-10 10:54:39.796388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.579 [2024-06-10 10:54:39.796399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.579 [2024-06-10 10:54:39.796404] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.579 [2024-06-10 10:54:39.796409] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.579 [2024-06-10 10:54:39.796419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.579 qpair failed and we were unable to recover it. 00:29:15.579 [2024-06-10 10:54:39.806465] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.579 [2024-06-10 10:54:39.806526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.579 [2024-06-10 10:54:39.806537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.579 [2024-06-10 10:54:39.806542] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.579 [2024-06-10 10:54:39.806547] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.579 [2024-06-10 10:54:39.806557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.579 qpair failed and we were unable to recover it. 00:29:15.579 [2024-06-10 10:54:39.816617] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.579 [2024-06-10 10:54:39.816722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.579 [2024-06-10 10:54:39.816734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.579 [2024-06-10 10:54:39.816739] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.579 [2024-06-10 10:54:39.816744] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.579 [2024-06-10 10:54:39.816754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.580 qpair failed and we were unable to recover it. 
00:29:15.580 [2024-06-10 10:54:39.826536] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.580 [2024-06-10 10:54:39.826594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.580 [2024-06-10 10:54:39.826606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.580 [2024-06-10 10:54:39.826611] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.580 [2024-06-10 10:54:39.826616] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.580 [2024-06-10 10:54:39.826626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.580 qpair failed and we were unable to recover it. 00:29:15.580 [2024-06-10 10:54:39.836570] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.580 [2024-06-10 10:54:39.836624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.580 [2024-06-10 10:54:39.836636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.580 [2024-06-10 10:54:39.836641] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.580 [2024-06-10 10:54:39.836646] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.580 [2024-06-10 10:54:39.836656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.580 qpair failed and we were unable to recover it. 00:29:15.580 [2024-06-10 10:54:39.846593] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.580 [2024-06-10 10:54:39.846652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.580 [2024-06-10 10:54:39.846664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.580 [2024-06-10 10:54:39.846669] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.580 [2024-06-10 10:54:39.846673] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.580 [2024-06-10 10:54:39.846685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.580 qpair failed and we were unable to recover it. 
00:29:15.580 [2024-06-10 10:54:39.856681] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.580 [2024-06-10 10:54:39.856739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.580 [2024-06-10 10:54:39.856753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.580 [2024-06-10 10:54:39.856758] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.580 [2024-06-10 10:54:39.856763] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.580 [2024-06-10 10:54:39.856773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.580 qpair failed and we were unable to recover it. 00:29:15.842 [2024-06-10 10:54:39.866535] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.842 [2024-06-10 10:54:39.866593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.842 [2024-06-10 10:54:39.866605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.842 [2024-06-10 10:54:39.866611] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.842 [2024-06-10 10:54:39.866616] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.842 [2024-06-10 10:54:39.866626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.842 qpair failed and we were unable to recover it. 00:29:15.842 [2024-06-10 10:54:39.876551] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.842 [2024-06-10 10:54:39.876605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.842 [2024-06-10 10:54:39.876617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.842 [2024-06-10 10:54:39.876622] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.842 [2024-06-10 10:54:39.876627] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.842 [2024-06-10 10:54:39.876637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.842 qpair failed and we were unable to recover it. 
00:29:15.842 [2024-06-10 10:54:39.886705] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.842 [2024-06-10 10:54:39.886761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.842 [2024-06-10 10:54:39.886773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.842 [2024-06-10 10:54:39.886778] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.842 [2024-06-10 10:54:39.886783] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.842 [2024-06-10 10:54:39.886793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.842 qpair failed and we were unable to recover it. 00:29:15.842 [2024-06-10 10:54:39.896759] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.842 [2024-06-10 10:54:39.896816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.842 [2024-06-10 10:54:39.896828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.842 [2024-06-10 10:54:39.896833] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.842 [2024-06-10 10:54:39.896838] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.842 [2024-06-10 10:54:39.896851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.842 qpair failed and we were unable to recover it. 00:29:15.842 [2024-06-10 10:54:39.906772] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.842 [2024-06-10 10:54:39.906854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.842 [2024-06-10 10:54:39.906866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.842 [2024-06-10 10:54:39.906871] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.842 [2024-06-10 10:54:39.906876] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.842 [2024-06-10 10:54:39.906885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.842 qpair failed and we were unable to recover it. 
00:29:15.842 [2024-06-10 10:54:39.916669] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.842 [2024-06-10 10:54:39.916730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.842 [2024-06-10 10:54:39.916742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.842 [2024-06-10 10:54:39.916746] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.842 [2024-06-10 10:54:39.916751] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.842 [2024-06-10 10:54:39.916761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.842 qpair failed and we were unable to recover it. 00:29:15.842 [2024-06-10 10:54:39.926693] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.842 [2024-06-10 10:54:39.926746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.842 [2024-06-10 10:54:39.926758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.842 [2024-06-10 10:54:39.926763] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.842 [2024-06-10 10:54:39.926767] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.842 [2024-06-10 10:54:39.926777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.842 qpair failed and we were unable to recover it. 00:29:15.842 [2024-06-10 10:54:39.936888] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.842 [2024-06-10 10:54:39.936949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.842 [2024-06-10 10:54:39.936960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.842 [2024-06-10 10:54:39.936965] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.842 [2024-06-10 10:54:39.936970] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.842 [2024-06-10 10:54:39.936980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.842 qpair failed and we were unable to recover it. 
00:29:15.842 [2024-06-10 10:54:39.946867] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.842 [2024-06-10 10:54:39.946928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.842 [2024-06-10 10:54:39.946942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.843 [2024-06-10 10:54:39.946948] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.843 [2024-06-10 10:54:39.946952] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.843 [2024-06-10 10:54:39.946963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.843 qpair failed and we were unable to recover it. 00:29:15.843 [2024-06-10 10:54:39.956881] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.843 [2024-06-10 10:54:39.956934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.843 [2024-06-10 10:54:39.956946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.843 [2024-06-10 10:54:39.956951] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.843 [2024-06-10 10:54:39.956956] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.843 [2024-06-10 10:54:39.956966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.843 qpair failed and we were unable to recover it. 00:29:15.843 [2024-06-10 10:54:39.966920] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.843 [2024-06-10 10:54:39.966976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.843 [2024-06-10 10:54:39.966988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.843 [2024-06-10 10:54:39.966993] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.843 [2024-06-10 10:54:39.966997] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.843 [2024-06-10 10:54:39.967008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.843 qpair failed and we were unable to recover it. 
00:29:15.843 [2024-06-10 10:54:39.976997] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.843 [2024-06-10 10:54:39.977057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.843 [2024-06-10 10:54:39.977069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.843 [2024-06-10 10:54:39.977074] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.843 [2024-06-10 10:54:39.977078] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.843 [2024-06-10 10:54:39.977089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.843 qpair failed and we were unable to recover it. 00:29:15.843 [2024-06-10 10:54:39.986858] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.843 [2024-06-10 10:54:39.986921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.843 [2024-06-10 10:54:39.986932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.843 [2024-06-10 10:54:39.986938] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.843 [2024-06-10 10:54:39.986945] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.843 [2024-06-10 10:54:39.986955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.843 qpair failed and we were unable to recover it. 00:29:15.843 [2024-06-10 10:54:39.997020] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.843 [2024-06-10 10:54:39.997081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.843 [2024-06-10 10:54:39.997093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.843 [2024-06-10 10:54:39.997098] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.843 [2024-06-10 10:54:39.997103] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.843 [2024-06-10 10:54:39.997114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.843 qpair failed and we were unable to recover it. 
00:29:15.843 [2024-06-10 10:54:40.007039] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.843 [2024-06-10 10:54:40.007095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.843 [2024-06-10 10:54:40.007109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.843 [2024-06-10 10:54:40.007115] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.843 [2024-06-10 10:54:40.007120] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.843 [2024-06-10 10:54:40.007131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.843 qpair failed and we were unable to recover it. 00:29:15.843 [2024-06-10 10:54:40.016981] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.843 [2024-06-10 10:54:40.017043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.843 [2024-06-10 10:54:40.017056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.843 [2024-06-10 10:54:40.017061] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.843 [2024-06-10 10:54:40.017066] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.843 [2024-06-10 10:54:40.017077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.843 qpair failed and we were unable to recover it. 00:29:15.843 [2024-06-10 10:54:40.027095] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.843 [2024-06-10 10:54:40.027190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.843 [2024-06-10 10:54:40.027202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.843 [2024-06-10 10:54:40.027208] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.843 [2024-06-10 10:54:40.027212] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.843 [2024-06-10 10:54:40.027223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.843 qpair failed and we were unable to recover it. 
00:29:15.843 [2024-06-10 10:54:40.037036] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.843 [2024-06-10 10:54:40.037105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.843 [2024-06-10 10:54:40.037117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.843 [2024-06-10 10:54:40.037123] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.843 [2024-06-10 10:54:40.037127] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.843 [2024-06-10 10:54:40.037138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.843 qpair failed and we were unable to recover it. 00:29:15.843 [2024-06-10 10:54:40.047161] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.843 [2024-06-10 10:54:40.047216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.843 [2024-06-10 10:54:40.047229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.843 [2024-06-10 10:54:40.047234] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.843 [2024-06-10 10:54:40.047238] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.843 [2024-06-10 10:54:40.047252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.843 qpair failed and we were unable to recover it. 00:29:15.843 [2024-06-10 10:54:40.057215] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.843 [2024-06-10 10:54:40.057280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.843 [2024-06-10 10:54:40.057293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.843 [2024-06-10 10:54:40.057298] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.843 [2024-06-10 10:54:40.057303] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.843 [2024-06-10 10:54:40.057314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.843 qpair failed and we were unable to recover it. 
00:29:15.843 [2024-06-10 10:54:40.067076] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.843 [2024-06-10 10:54:40.067129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.843 [2024-06-10 10:54:40.067145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.843 [2024-06-10 10:54:40.067150] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.843 [2024-06-10 10:54:40.067155] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.843 [2024-06-10 10:54:40.067167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.843 qpair failed and we were unable to recover it. 00:29:15.843 [2024-06-10 10:54:40.077288] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.843 [2024-06-10 10:54:40.077341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.843 [2024-06-10 10:54:40.077353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.843 [2024-06-10 10:54:40.077358] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.844 [2024-06-10 10:54:40.077366] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.844 [2024-06-10 10:54:40.077377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.844 qpair failed and we were unable to recover it. 00:29:15.844 [2024-06-10 10:54:40.087141] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.844 [2024-06-10 10:54:40.087193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.844 [2024-06-10 10:54:40.087206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.844 [2024-06-10 10:54:40.087211] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.844 [2024-06-10 10:54:40.087216] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.844 [2024-06-10 10:54:40.087227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.844 qpair failed and we were unable to recover it. 
00:29:15.844 [2024-06-10 10:54:40.097357] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.844 [2024-06-10 10:54:40.097418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.844 [2024-06-10 10:54:40.097431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.844 [2024-06-10 10:54:40.097436] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.844 [2024-06-10 10:54:40.097440] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.844 [2024-06-10 10:54:40.097452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.844 qpair failed and we were unable to recover it. 00:29:15.844 [2024-06-10 10:54:40.107306] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.844 [2024-06-10 10:54:40.107362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.844 [2024-06-10 10:54:40.107375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.844 [2024-06-10 10:54:40.107380] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.844 [2024-06-10 10:54:40.107385] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.844 [2024-06-10 10:54:40.107395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.844 qpair failed and we were unable to recover it. 00:29:15.844 [2024-06-10 10:54:40.117343] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.844 [2024-06-10 10:54:40.117401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.844 [2024-06-10 10:54:40.117413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.844 [2024-06-10 10:54:40.117419] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.844 [2024-06-10 10:54:40.117423] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.844 [2024-06-10 10:54:40.117434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.844 qpair failed and we were unable to recover it. 
00:29:15.844 [2024-06-10 10:54:40.127237] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.844 [2024-06-10 10:54:40.127296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.844 [2024-06-10 10:54:40.127308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.844 [2024-06-10 10:54:40.127314] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.844 [2024-06-10 10:54:40.127318] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:15.844 [2024-06-10 10:54:40.127329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:15.844 qpair failed and we were unable to recover it. 00:29:16.106 [2024-06-10 10:54:40.137453] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.106 [2024-06-10 10:54:40.137512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.106 [2024-06-10 10:54:40.137524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.106 [2024-06-10 10:54:40.137529] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.106 [2024-06-10 10:54:40.137534] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.106 [2024-06-10 10:54:40.137544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.106 qpair failed and we were unable to recover it. 00:29:16.106 [2024-06-10 10:54:40.147298] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.106 [2024-06-10 10:54:40.147353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.106 [2024-06-10 10:54:40.147366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.106 [2024-06-10 10:54:40.147371] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.106 [2024-06-10 10:54:40.147376] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.106 [2024-06-10 10:54:40.147387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.106 qpair failed and we were unable to recover it. 
00:29:16.106 [2024-06-10 10:54:40.157424] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.106 [2024-06-10 10:54:40.157486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.106 [2024-06-10 10:54:40.157499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.106 [2024-06-10 10:54:40.157504] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.106 [2024-06-10 10:54:40.157508] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.106 [2024-06-10 10:54:40.157519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.106 qpair failed and we were unable to recover it. 00:29:16.106 [2024-06-10 10:54:40.167465] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.106 [2024-06-10 10:54:40.167519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.106 [2024-06-10 10:54:40.167531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.106 [2024-06-10 10:54:40.167539] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.106 [2024-06-10 10:54:40.167544] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.106 [2024-06-10 10:54:40.167554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.106 qpair failed and we were unable to recover it. 00:29:16.106 [2024-06-10 10:54:40.177409] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.106 [2024-06-10 10:54:40.177465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.106 [2024-06-10 10:54:40.177478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.106 [2024-06-10 10:54:40.177483] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.106 [2024-06-10 10:54:40.177487] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.106 [2024-06-10 10:54:40.177499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.106 qpair failed and we were unable to recover it. 
00:29:16.106 [2024-06-10 10:54:40.187511] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.106 [2024-06-10 10:54:40.187569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.106 [2024-06-10 10:54:40.187581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.106 [2024-06-10 10:54:40.187587] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.106 [2024-06-10 10:54:40.187591] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.107 [2024-06-10 10:54:40.187602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.107 qpair failed and we were unable to recover it. 00:29:16.107 [2024-06-10 10:54:40.197620] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.107 [2024-06-10 10:54:40.197682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.107 [2024-06-10 10:54:40.197694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.107 [2024-06-10 10:54:40.197699] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.107 [2024-06-10 10:54:40.197704] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.107 [2024-06-10 10:54:40.197714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.107 qpair failed and we were unable to recover it. 00:29:16.107 [2024-06-10 10:54:40.207446] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.107 [2024-06-10 10:54:40.207495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.107 [2024-06-10 10:54:40.207508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.107 [2024-06-10 10:54:40.207513] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.107 [2024-06-10 10:54:40.207517] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.107 [2024-06-10 10:54:40.207528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.107 qpair failed and we were unable to recover it. 
00:29:16.107 [2024-06-10 10:54:40.217623] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.107 [2024-06-10 10:54:40.217683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.107 [2024-06-10 10:54:40.217695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.107 [2024-06-10 10:54:40.217700] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.107 [2024-06-10 10:54:40.217705] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.107 [2024-06-10 10:54:40.217716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.107 qpair failed and we were unable to recover it. 00:29:16.107 [2024-06-10 10:54:40.227600] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.107 [2024-06-10 10:54:40.227670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.107 [2024-06-10 10:54:40.227683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.107 [2024-06-10 10:54:40.227688] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.107 [2024-06-10 10:54:40.227693] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.107 [2024-06-10 10:54:40.227703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.107 qpair failed and we were unable to recover it. 00:29:16.107 [2024-06-10 10:54:40.237645] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.107 [2024-06-10 10:54:40.237710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.107 [2024-06-10 10:54:40.237722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.107 [2024-06-10 10:54:40.237727] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.107 [2024-06-10 10:54:40.237732] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.107 [2024-06-10 10:54:40.237742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.107 qpair failed and we were unable to recover it. 
00:29:16.107 [2024-06-10 10:54:40.247583] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.107 [2024-06-10 10:54:40.247638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.107 [2024-06-10 10:54:40.247650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.107 [2024-06-10 10:54:40.247655] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.107 [2024-06-10 10:54:40.247660] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.107 [2024-06-10 10:54:40.247670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.107 qpair failed and we were unable to recover it. 00:29:16.107 [2024-06-10 10:54:40.257741] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.107 [2024-06-10 10:54:40.257830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.107 [2024-06-10 10:54:40.257845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.107 [2024-06-10 10:54:40.257850] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.107 [2024-06-10 10:54:40.257855] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.107 [2024-06-10 10:54:40.257866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.107 qpair failed and we were unable to recover it. 00:29:16.107 [2024-06-10 10:54:40.267777] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.107 [2024-06-10 10:54:40.267840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.107 [2024-06-10 10:54:40.267853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.107 [2024-06-10 10:54:40.267858] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.107 [2024-06-10 10:54:40.267863] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.107 [2024-06-10 10:54:40.267874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.107 qpair failed and we were unable to recover it. 
00:29:16.107 [2024-06-10 10:54:40.277761] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.107 [2024-06-10 10:54:40.277816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.107 [2024-06-10 10:54:40.277828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.107 [2024-06-10 10:54:40.277834] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.107 [2024-06-10 10:54:40.277838] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.107 [2024-06-10 10:54:40.277849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.107 qpair failed and we were unable to recover it. 00:29:16.107 [2024-06-10 10:54:40.287798] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.107 [2024-06-10 10:54:40.287848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.107 [2024-06-10 10:54:40.287860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.107 [2024-06-10 10:54:40.287866] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.107 [2024-06-10 10:54:40.287870] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.107 [2024-06-10 10:54:40.287882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.107 qpair failed and we were unable to recover it. 00:29:16.107 [2024-06-10 10:54:40.297954] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.107 [2024-06-10 10:54:40.298014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.107 [2024-06-10 10:54:40.298027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.107 [2024-06-10 10:54:40.298032] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.107 [2024-06-10 10:54:40.298037] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.107 [2024-06-10 10:54:40.298056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.107 qpair failed and we were unable to recover it. 
00:29:16.107 [2024-06-10 10:54:40.307843] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.107 [2024-06-10 10:54:40.307903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.107 [2024-06-10 10:54:40.307916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.107 [2024-06-10 10:54:40.307921] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.107 [2024-06-10 10:54:40.307926] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.107 [2024-06-10 10:54:40.307937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.107 qpair failed and we were unable to recover it. 00:29:16.107 [2024-06-10 10:54:40.317865] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.107 [2024-06-10 10:54:40.317919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.107 [2024-06-10 10:54:40.317931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.107 [2024-06-10 10:54:40.317937] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.107 [2024-06-10 10:54:40.317941] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.107 [2024-06-10 10:54:40.317952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.107 qpair failed and we were unable to recover it. 00:29:16.107 [2024-06-10 10:54:40.327953] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.107 [2024-06-10 10:54:40.328018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.108 [2024-06-10 10:54:40.328031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.108 [2024-06-10 10:54:40.328036] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.108 [2024-06-10 10:54:40.328041] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.108 [2024-06-10 10:54:40.328052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.108 qpair failed and we were unable to recover it. 
00:29:16.108 [2024-06-10 10:54:40.337974] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.108 [2024-06-10 10:54:40.338035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.108 [2024-06-10 10:54:40.338047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.108 [2024-06-10 10:54:40.338052] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.108 [2024-06-10 10:54:40.338057] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.108 [2024-06-10 10:54:40.338068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.108 qpair failed and we were unable to recover it. 00:29:16.108 [2024-06-10 10:54:40.347972] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.108 [2024-06-10 10:54:40.348030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.108 [2024-06-10 10:54:40.348045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.108 [2024-06-10 10:54:40.348050] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.108 [2024-06-10 10:54:40.348055] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.108 [2024-06-10 10:54:40.348066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.108 qpair failed and we were unable to recover it. 00:29:16.108 [2024-06-10 10:54:40.357980] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.108 [2024-06-10 10:54:40.358155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.108 [2024-06-10 10:54:40.358168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.108 [2024-06-10 10:54:40.358173] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.108 [2024-06-10 10:54:40.358178] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.108 [2024-06-10 10:54:40.358189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.108 qpair failed and we were unable to recover it. 
00:29:16.108 [2024-06-10 10:54:40.368004] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.108 [2024-06-10 10:54:40.368065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.108 [2024-06-10 10:54:40.368077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.108 [2024-06-10 10:54:40.368082] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.108 [2024-06-10 10:54:40.368086] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.108 [2024-06-10 10:54:40.368097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.108 qpair failed and we were unable to recover it. 00:29:16.108 [2024-06-10 10:54:40.378085] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.108 [2024-06-10 10:54:40.378144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.108 [2024-06-10 10:54:40.378157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.108 [2024-06-10 10:54:40.378162] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.108 [2024-06-10 10:54:40.378167] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.108 [2024-06-10 10:54:40.378177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.108 qpair failed and we were unable to recover it. 00:29:16.108 [2024-06-10 10:54:40.387964] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.108 [2024-06-10 10:54:40.388023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.108 [2024-06-10 10:54:40.388036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.108 [2024-06-10 10:54:40.388041] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.108 [2024-06-10 10:54:40.388045] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.108 [2024-06-10 10:54:40.388059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.108 qpair failed and we were unable to recover it. 
00:29:16.371 [2024-06-10 10:54:40.398149] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.371 [2024-06-10 10:54:40.398212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.371 [2024-06-10 10:54:40.398224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.371 [2024-06-10 10:54:40.398229] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.371 [2024-06-10 10:54:40.398234] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.371 [2024-06-10 10:54:40.398249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.371 qpair failed and we were unable to recover it. 00:29:16.371 [2024-06-10 10:54:40.408071] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.371 [2024-06-10 10:54:40.408129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.371 [2024-06-10 10:54:40.408141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.371 [2024-06-10 10:54:40.408147] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.371 [2024-06-10 10:54:40.408151] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.371 [2024-06-10 10:54:40.408162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.371 qpair failed and we were unable to recover it. 00:29:16.371 [2024-06-10 10:54:40.418053] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.371 [2024-06-10 10:54:40.418117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.371 [2024-06-10 10:54:40.418129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.371 [2024-06-10 10:54:40.418134] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.371 [2024-06-10 10:54:40.418138] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.371 [2024-06-10 10:54:40.418149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.371 qpair failed and we were unable to recover it. 
00:29:16.371 [2024-06-10 10:54:40.428114] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.371 [2024-06-10 10:54:40.428176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.371 [2024-06-10 10:54:40.428199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.371 [2024-06-10 10:54:40.428205] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.371 [2024-06-10 10:54:40.428209] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.371 [2024-06-10 10:54:40.428221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.371 qpair failed and we were unable to recover it. 00:29:16.371 [2024-06-10 10:54:40.438184] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.371 [2024-06-10 10:54:40.438240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.371 [2024-06-10 10:54:40.438258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.371 [2024-06-10 10:54:40.438263] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.371 [2024-06-10 10:54:40.438267] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.371 [2024-06-10 10:54:40.438279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.371 qpair failed and we were unable to recover it. 00:29:16.371 [2024-06-10 10:54:40.448252] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.371 [2024-06-10 10:54:40.448313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.371 [2024-06-10 10:54:40.448326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.371 [2024-06-10 10:54:40.448331] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.371 [2024-06-10 10:54:40.448336] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.371 [2024-06-10 10:54:40.448347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.371 qpair failed and we were unable to recover it. 
00:29:16.371 [2024-06-10 10:54:40.458288] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.371 [2024-06-10 10:54:40.458347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.371 [2024-06-10 10:54:40.458359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.371 [2024-06-10 10:54:40.458364] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.371 [2024-06-10 10:54:40.458369] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.371 [2024-06-10 10:54:40.458380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.371 qpair failed and we were unable to recover it. 00:29:16.371 [2024-06-10 10:54:40.468150] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.371 [2024-06-10 10:54:40.468209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.371 [2024-06-10 10:54:40.468222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.371 [2024-06-10 10:54:40.468227] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.371 [2024-06-10 10:54:40.468231] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.372 [2024-06-10 10:54:40.468246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.372 qpair failed and we were unable to recover it. 00:29:16.372 [2024-06-10 10:54:40.478292] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.372 [2024-06-10 10:54:40.478349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.372 [2024-06-10 10:54:40.478361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.372 [2024-06-10 10:54:40.478366] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.372 [2024-06-10 10:54:40.478374] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.372 [2024-06-10 10:54:40.478385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.372 qpair failed and we were unable to recover it. 
00:29:16.372 [2024-06-10 10:54:40.488275] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.372 [2024-06-10 10:54:40.488363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.372 [2024-06-10 10:54:40.488375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.372 [2024-06-10 10:54:40.488380] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.372 [2024-06-10 10:54:40.488384] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.372 [2024-06-10 10:54:40.488395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.372 qpair failed and we were unable to recover it. 00:29:16.372 [2024-06-10 10:54:40.498407] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.372 [2024-06-10 10:54:40.498467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.372 [2024-06-10 10:54:40.498478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.372 [2024-06-10 10:54:40.498484] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.372 [2024-06-10 10:54:40.498489] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.372 [2024-06-10 10:54:40.498499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.372 qpair failed and we were unable to recover it. 00:29:16.372 [2024-06-10 10:54:40.508400] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.372 [2024-06-10 10:54:40.508455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.372 [2024-06-10 10:54:40.508467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.372 [2024-06-10 10:54:40.508472] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.372 [2024-06-10 10:54:40.508477] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.372 [2024-06-10 10:54:40.508487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.372 qpair failed and we were unable to recover it. 
00:29:16.372 [2024-06-10 10:54:40.518423] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.372 [2024-06-10 10:54:40.518478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.372 [2024-06-10 10:54:40.518489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.372 [2024-06-10 10:54:40.518494] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.372 [2024-06-10 10:54:40.518499] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.372 [2024-06-10 10:54:40.518509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.372 qpair failed and we were unable to recover it. 00:29:16.372 [2024-06-10 10:54:40.528482] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.372 [2024-06-10 10:54:40.528555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.372 [2024-06-10 10:54:40.528568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.372 [2024-06-10 10:54:40.528573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.372 [2024-06-10 10:54:40.528578] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.372 [2024-06-10 10:54:40.528589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.372 qpair failed and we were unable to recover it. 00:29:16.372 [2024-06-10 10:54:40.538382] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.372 [2024-06-10 10:54:40.538442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.372 [2024-06-10 10:54:40.538453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.372 [2024-06-10 10:54:40.538458] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.372 [2024-06-10 10:54:40.538463] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.372 [2024-06-10 10:54:40.538474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.372 qpair failed and we were unable to recover it. 
00:29:16.372 [2024-06-10 10:54:40.548490] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.372 [2024-06-10 10:54:40.548550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.372 [2024-06-10 10:54:40.548562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.372 [2024-06-10 10:54:40.548567] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.372 [2024-06-10 10:54:40.548571] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.372 [2024-06-10 10:54:40.548582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.372 qpair failed and we were unable to recover it. 00:29:16.372 [2024-06-10 10:54:40.558550] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.372 [2024-06-10 10:54:40.558606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.372 [2024-06-10 10:54:40.558617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.372 [2024-06-10 10:54:40.558622] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.372 [2024-06-10 10:54:40.558627] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.372 [2024-06-10 10:54:40.558637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.372 qpair failed and we were unable to recover it. 00:29:16.372 [2024-06-10 10:54:40.568569] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.372 [2024-06-10 10:54:40.568662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.372 [2024-06-10 10:54:40.568674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.372 [2024-06-10 10:54:40.568683] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.372 [2024-06-10 10:54:40.568687] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.372 [2024-06-10 10:54:40.568699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.372 qpair failed and we were unable to recover it. 
00:29:16.372 [2024-06-10 10:54:40.578653] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.372 [2024-06-10 10:54:40.578714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.372 [2024-06-10 10:54:40.578725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.372 [2024-06-10 10:54:40.578731] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.372 [2024-06-10 10:54:40.578735] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.372 [2024-06-10 10:54:40.578745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.372 qpair failed and we were unable to recover it. 00:29:16.372 [2024-06-10 10:54:40.588606] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.372 [2024-06-10 10:54:40.588665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.372 [2024-06-10 10:54:40.588677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.372 [2024-06-10 10:54:40.588682] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.372 [2024-06-10 10:54:40.588686] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.372 [2024-06-10 10:54:40.588696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.372 qpair failed and we were unable to recover it. 00:29:16.372 [2024-06-10 10:54:40.598628] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.372 [2024-06-10 10:54:40.598689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.372 [2024-06-10 10:54:40.598700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.372 [2024-06-10 10:54:40.598705] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.373 [2024-06-10 10:54:40.598710] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.373 [2024-06-10 10:54:40.598721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.373 qpair failed and we were unable to recover it. 
00:29:16.373 [2024-06-10 10:54:40.608671] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.373 [2024-06-10 10:54:40.608728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.373 [2024-06-10 10:54:40.608739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.373 [2024-06-10 10:54:40.608744] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.373 [2024-06-10 10:54:40.608749] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.373 [2024-06-10 10:54:40.608759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.373 qpair failed and we were unable to recover it. 00:29:16.373 [2024-06-10 10:54:40.618714] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.373 [2024-06-10 10:54:40.618799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.373 [2024-06-10 10:54:40.618811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.373 [2024-06-10 10:54:40.618816] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.373 [2024-06-10 10:54:40.618822] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.373 [2024-06-10 10:54:40.618832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.373 qpair failed and we were unable to recover it. 00:29:16.373 [2024-06-10 10:54:40.628721] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.373 [2024-06-10 10:54:40.628883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.373 [2024-06-10 10:54:40.628896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.373 [2024-06-10 10:54:40.628901] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.373 [2024-06-10 10:54:40.628906] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.373 [2024-06-10 10:54:40.628916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.373 qpair failed and we were unable to recover it. 
00:29:16.373 [2024-06-10 10:54:40.638761] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.373 [2024-06-10 10:54:40.638820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.373 [2024-06-10 10:54:40.638839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.373 [2024-06-10 10:54:40.638845] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.373 [2024-06-10 10:54:40.638850] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.373 [2024-06-10 10:54:40.638864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.373 qpair failed and we were unable to recover it. 00:29:16.373 [2024-06-10 10:54:40.648773] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.373 [2024-06-10 10:54:40.648861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.373 [2024-06-10 10:54:40.648876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.373 [2024-06-10 10:54:40.648884] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.373 [2024-06-10 10:54:40.648888] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.373 [2024-06-10 10:54:40.648901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.373 qpair failed and we were unable to recover it. 00:29:16.636 [2024-06-10 10:54:40.658828] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.636 [2024-06-10 10:54:40.658926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.636 [2024-06-10 10:54:40.658943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.636 [2024-06-10 10:54:40.658948] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.636 [2024-06-10 10:54:40.658953] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.636 [2024-06-10 10:54:40.658964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.636 qpair failed and we were unable to recover it. 
00:29:16.636 [2024-06-10 10:54:40.668836] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.636 [2024-06-10 10:54:40.668897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.636 [2024-06-10 10:54:40.668917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.636 [2024-06-10 10:54:40.668923] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.636 [2024-06-10 10:54:40.668928] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.636 [2024-06-10 10:54:40.668942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.636 qpair failed and we were unable to recover it. 00:29:16.636 [2024-06-10 10:54:40.678936] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.636 [2024-06-10 10:54:40.678996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.636 [2024-06-10 10:54:40.679015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.636 [2024-06-10 10:54:40.679021] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.636 [2024-06-10 10:54:40.679026] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.636 [2024-06-10 10:54:40.679040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.636 qpair failed and we were unable to recover it. 00:29:16.636 [2024-06-10 10:54:40.688890] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.636 [2024-06-10 10:54:40.688944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.636 [2024-06-10 10:54:40.688957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.636 [2024-06-10 10:54:40.688962] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.636 [2024-06-10 10:54:40.688967] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.636 [2024-06-10 10:54:40.688978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.636 qpair failed and we were unable to recover it. 
00:29:16.636 [2024-06-10 10:54:40.698958] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.636 [2024-06-10 10:54:40.699024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.636 [2024-06-10 10:54:40.699043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.636 [2024-06-10 10:54:40.699049] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.636 [2024-06-10 10:54:40.699054] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.636 [2024-06-10 10:54:40.699068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.636 qpair failed and we were unable to recover it. 00:29:16.636 [2024-06-10 10:54:40.708945] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.637 [2024-06-10 10:54:40.709003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.637 [2024-06-10 10:54:40.709017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.637 [2024-06-10 10:54:40.709022] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.637 [2024-06-10 10:54:40.709026] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.637 [2024-06-10 10:54:40.709037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.637 qpair failed and we were unable to recover it. 00:29:16.637 [2024-06-10 10:54:40.718828] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.637 [2024-06-10 10:54:40.718885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.637 [2024-06-10 10:54:40.718898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.637 [2024-06-10 10:54:40.718903] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.637 [2024-06-10 10:54:40.718907] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.637 [2024-06-10 10:54:40.718918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.637 qpair failed and we were unable to recover it. 
00:29:16.637 [2024-06-10 10:54:40.728872] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.637 [2024-06-10 10:54:40.728928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.637 [2024-06-10 10:54:40.728940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.637 [2024-06-10 10:54:40.728946] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.637 [2024-06-10 10:54:40.728950] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.637 [2024-06-10 10:54:40.728960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.637 qpair failed and we were unable to recover it. 00:29:16.637 [2024-06-10 10:54:40.739062] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.637 [2024-06-10 10:54:40.739124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.637 [2024-06-10 10:54:40.739136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.637 [2024-06-10 10:54:40.739141] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.637 [2024-06-10 10:54:40.739145] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.637 [2024-06-10 10:54:40.739156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.637 qpair failed and we were unable to recover it. 00:29:16.637 [2024-06-10 10:54:40.749048] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.637 [2024-06-10 10:54:40.749107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.637 [2024-06-10 10:54:40.749122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.637 [2024-06-10 10:54:40.749128] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.637 [2024-06-10 10:54:40.749132] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.637 [2024-06-10 10:54:40.749143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.637 qpair failed and we were unable to recover it. 
00:29:16.637 [2024-06-10 10:54:40.759079] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.637 [2024-06-10 10:54:40.759134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.637 [2024-06-10 10:54:40.759146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.637 [2024-06-10 10:54:40.759151] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.637 [2024-06-10 10:54:40.759156] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.637 [2024-06-10 10:54:40.759167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.637 qpair failed and we were unable to recover it. 00:29:16.637 [2024-06-10 10:54:40.769122] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.637 [2024-06-10 10:54:40.769208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.637 [2024-06-10 10:54:40.769219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.637 [2024-06-10 10:54:40.769225] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.637 [2024-06-10 10:54:40.769230] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.637 [2024-06-10 10:54:40.769240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.637 qpair failed and we were unable to recover it. 00:29:16.637 [2024-06-10 10:54:40.779173] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.637 [2024-06-10 10:54:40.779231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.637 [2024-06-10 10:54:40.779247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.637 [2024-06-10 10:54:40.779252] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.637 [2024-06-10 10:54:40.779257] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.637 [2024-06-10 10:54:40.779268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.637 qpair failed and we were unable to recover it. 
00:29:16.637 [2024-06-10 10:54:40.789158] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.637 [2024-06-10 10:54:40.789226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.637 [2024-06-10 10:54:40.789239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.637 [2024-06-10 10:54:40.789248] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.637 [2024-06-10 10:54:40.789253] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.637 [2024-06-10 10:54:40.789267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.637 qpair failed and we were unable to recover it. 00:29:16.637 [2024-06-10 10:54:40.799269] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.637 [2024-06-10 10:54:40.799333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.637 [2024-06-10 10:54:40.799345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.637 [2024-06-10 10:54:40.799350] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.637 [2024-06-10 10:54:40.799355] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.637 [2024-06-10 10:54:40.799366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.637 qpair failed and we were unable to recover it. 00:29:16.637 [2024-06-10 10:54:40.809094] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.637 [2024-06-10 10:54:40.809186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.637 [2024-06-10 10:54:40.809199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.637 [2024-06-10 10:54:40.809204] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.637 [2024-06-10 10:54:40.809209] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.637 [2024-06-10 10:54:40.809220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.637 qpair failed and we were unable to recover it. 
00:29:16.637 [2024-06-10 10:54:40.819267] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.637 [2024-06-10 10:54:40.819321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.637 [2024-06-10 10:54:40.819333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.637 [2024-06-10 10:54:40.819338] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.637 [2024-06-10 10:54:40.819343] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.637 [2024-06-10 10:54:40.819354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.637 qpair failed and we were unable to recover it. 00:29:16.637 [2024-06-10 10:54:40.829327] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.637 [2024-06-10 10:54:40.829396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.637 [2024-06-10 10:54:40.829408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.638 [2024-06-10 10:54:40.829414] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.638 [2024-06-10 10:54:40.829418] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.638 [2024-06-10 10:54:40.829429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.638 qpair failed and we were unable to recover it. 00:29:16.638 [2024-06-10 10:54:40.839300] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.638 [2024-06-10 10:54:40.839359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.638 [2024-06-10 10:54:40.839376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.638 [2024-06-10 10:54:40.839382] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.638 [2024-06-10 10:54:40.839386] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.638 [2024-06-10 10:54:40.839397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.638 qpair failed and we were unable to recover it. 
00:29:16.638 [2024-06-10 10:54:40.849208] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.638 [2024-06-10 10:54:40.849265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.638 [2024-06-10 10:54:40.849278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.638 [2024-06-10 10:54:40.849284] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.638 [2024-06-10 10:54:40.849288] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.638 [2024-06-10 10:54:40.849299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.638 qpair failed and we were unable to recover it. 00:29:16.638 [2024-06-10 10:54:40.859382] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.638 [2024-06-10 10:54:40.859436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.638 [2024-06-10 10:54:40.859448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.638 [2024-06-10 10:54:40.859453] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.638 [2024-06-10 10:54:40.859458] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.638 [2024-06-10 10:54:40.859468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.638 qpair failed and we were unable to recover it. 00:29:16.638 [2024-06-10 10:54:40.869373] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.638 [2024-06-10 10:54:40.869429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.638 [2024-06-10 10:54:40.869441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.638 [2024-06-10 10:54:40.869447] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.638 [2024-06-10 10:54:40.869451] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.638 [2024-06-10 10:54:40.869462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.638 qpair failed and we were unable to recover it. 
00:29:16.638 [2024-06-10 10:54:40.879397] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.638 [2024-06-10 10:54:40.879449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.638 [2024-06-10 10:54:40.879461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.638 [2024-06-10 10:54:40.879467] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.638 [2024-06-10 10:54:40.879474] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.638 [2024-06-10 10:54:40.879486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.638 qpair failed and we were unable to recover it. 00:29:16.638 [2024-06-10 10:54:40.889430] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.638 [2024-06-10 10:54:40.889487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.638 [2024-06-10 10:54:40.889499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.638 [2024-06-10 10:54:40.889504] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.638 [2024-06-10 10:54:40.889509] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.638 [2024-06-10 10:54:40.889519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.638 qpair failed and we were unable to recover it. 00:29:16.638 [2024-06-10 10:54:40.899474] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.638 [2024-06-10 10:54:40.899525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.638 [2024-06-10 10:54:40.899537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.638 [2024-06-10 10:54:40.899542] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.638 [2024-06-10 10:54:40.899547] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.638 [2024-06-10 10:54:40.899557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.638 qpair failed and we were unable to recover it. 
00:29:16.638 [2024-06-10 10:54:40.909403] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.638 [2024-06-10 10:54:40.909474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.638 [2024-06-10 10:54:40.909487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.638 [2024-06-10 10:54:40.909495] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.638 [2024-06-10 10:54:40.909499] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.638 [2024-06-10 10:54:40.909510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.638 qpair failed and we were unable to recover it. 00:29:16.638 [2024-06-10 10:54:40.919381] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.638 [2024-06-10 10:54:40.919433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.638 [2024-06-10 10:54:40.919446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.638 [2024-06-10 10:54:40.919451] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.638 [2024-06-10 10:54:40.919456] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.638 [2024-06-10 10:54:40.919466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.638 qpair failed and we were unable to recover it. 00:29:16.900 [2024-06-10 10:54:40.929541] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.900 [2024-06-10 10:54:40.929599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.900 [2024-06-10 10:54:40.929611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.900 [2024-06-10 10:54:40.929616] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.900 [2024-06-10 10:54:40.929621] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.900 [2024-06-10 10:54:40.929631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.900 qpair failed and we were unable to recover it. 
00:29:16.900 [2024-06-10 10:54:40.939557] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.900 [2024-06-10 10:54:40.939611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.900 [2024-06-10 10:54:40.939623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.900 [2024-06-10 10:54:40.939628] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.900 [2024-06-10 10:54:40.939633] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.900 [2024-06-10 10:54:40.939643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.900 qpair failed and we were unable to recover it. 00:29:16.900 [2024-06-10 10:54:40.949607] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.900 [2024-06-10 10:54:40.949664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.900 [2024-06-10 10:54:40.949676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.900 [2024-06-10 10:54:40.949681] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.900 [2024-06-10 10:54:40.949685] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.900 [2024-06-10 10:54:40.949697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.900 qpair failed and we were unable to recover it. 00:29:16.900 [2024-06-10 10:54:40.959605] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.900 [2024-06-10 10:54:40.959661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.900 [2024-06-10 10:54:40.959672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.900 [2024-06-10 10:54:40.959677] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.900 [2024-06-10 10:54:40.959682] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.900 [2024-06-10 10:54:40.959692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.900 qpair failed and we were unable to recover it. 
00:29:16.901 [2024-06-10 10:54:40.969638] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.901 [2024-06-10 10:54:40.969729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.901 [2024-06-10 10:54:40.969741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.901 [2024-06-10 10:54:40.969749] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.901 [2024-06-10 10:54:40.969753] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.901 [2024-06-10 10:54:40.969764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.901 qpair failed and we were unable to recover it. 00:29:16.901 [2024-06-10 10:54:40.979688] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.901 [2024-06-10 10:54:40.979743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.901 [2024-06-10 10:54:40.979754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.901 [2024-06-10 10:54:40.979759] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.901 [2024-06-10 10:54:40.979764] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.901 [2024-06-10 10:54:40.979774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.901 qpair failed and we were unable to recover it. 00:29:16.901 [2024-06-10 10:54:40.989696] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.901 [2024-06-10 10:54:40.989754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.901 [2024-06-10 10:54:40.989766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.901 [2024-06-10 10:54:40.989771] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.901 [2024-06-10 10:54:40.989776] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.901 [2024-06-10 10:54:40.989786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.901 qpair failed and we were unable to recover it. 
00:29:16.901 [2024-06-10 10:54:40.999710] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.901 [2024-06-10 10:54:40.999768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.901 [2024-06-10 10:54:40.999779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.901 [2024-06-10 10:54:40.999784] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.901 [2024-06-10 10:54:40.999789] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.901 [2024-06-10 10:54:40.999799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.901 qpair failed and we were unable to recover it. 00:29:16.901 [2024-06-10 10:54:41.009628] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.901 [2024-06-10 10:54:41.009682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.901 [2024-06-10 10:54:41.009694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.901 [2024-06-10 10:54:41.009699] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.901 [2024-06-10 10:54:41.009704] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.901 [2024-06-10 10:54:41.009715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.901 qpair failed and we were unable to recover it. 00:29:16.901 [2024-06-10 10:54:41.019779] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.901 [2024-06-10 10:54:41.019832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.901 [2024-06-10 10:54:41.019844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.901 [2024-06-10 10:54:41.019849] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.901 [2024-06-10 10:54:41.019854] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.901 [2024-06-10 10:54:41.019864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.901 qpair failed and we were unable to recover it. 
00:29:16.901 [2024-06-10 10:54:41.029802] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.901 [2024-06-10 10:54:41.029857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.901 [2024-06-10 10:54:41.029869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.901 [2024-06-10 10:54:41.029874] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.901 [2024-06-10 10:54:41.029879] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.901 [2024-06-10 10:54:41.029889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.901 qpair failed and we were unable to recover it. 00:29:16.901 [2024-06-10 10:54:41.039820] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.901 [2024-06-10 10:54:41.039875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.901 [2024-06-10 10:54:41.039894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.901 [2024-06-10 10:54:41.039900] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.901 [2024-06-10 10:54:41.039905] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.901 [2024-06-10 10:54:41.039919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.901 qpair failed and we were unable to recover it. 00:29:16.901 [2024-06-10 10:54:41.049868] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.901 [2024-06-10 10:54:41.049927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.901 [2024-06-10 10:54:41.049946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.901 [2024-06-10 10:54:41.049952] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.901 [2024-06-10 10:54:41.049956] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.901 [2024-06-10 10:54:41.049970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.901 qpair failed and we were unable to recover it. 
00:29:16.901 [2024-06-10 10:54:41.059821] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.901 [2024-06-10 10:54:41.059876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.901 [2024-06-10 10:54:41.059889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.901 [2024-06-10 10:54:41.059898] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.901 [2024-06-10 10:54:41.059902] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.901 [2024-06-10 10:54:41.059914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.901 qpair failed and we were unable to recover it. 00:29:16.901 [2024-06-10 10:54:41.069953] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.901 [2024-06-10 10:54:41.070014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.901 [2024-06-10 10:54:41.070027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.901 [2024-06-10 10:54:41.070032] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.901 [2024-06-10 10:54:41.070037] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.901 [2024-06-10 10:54:41.070047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.901 qpair failed and we were unable to recover it. 00:29:16.901 [2024-06-10 10:54:41.079935] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.901 [2024-06-10 10:54:41.079987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.901 [2024-06-10 10:54:41.079999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.901 [2024-06-10 10:54:41.080005] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.901 [2024-06-10 10:54:41.080009] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.902 [2024-06-10 10:54:41.080020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.902 qpair failed and we were unable to recover it. 
00:29:16.902 [2024-06-10 10:54:41.089843] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.902 [2024-06-10 10:54:41.089895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.902 [2024-06-10 10:54:41.089907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.902 [2024-06-10 10:54:41.089913] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.902 [2024-06-10 10:54:41.089917] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.902 [2024-06-10 10:54:41.089928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.902 qpair failed and we were unable to recover it. 00:29:16.902 [2024-06-10 10:54:41.099989] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.902 [2024-06-10 10:54:41.100041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.902 [2024-06-10 10:54:41.100052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.902 [2024-06-10 10:54:41.100057] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.902 [2024-06-10 10:54:41.100062] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.902 [2024-06-10 10:54:41.100073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.902 qpair failed and we were unable to recover it. 00:29:16.902 [2024-06-10 10:54:41.110026] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.902 [2024-06-10 10:54:41.110082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.902 [2024-06-10 10:54:41.110094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.902 [2024-06-10 10:54:41.110099] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.902 [2024-06-10 10:54:41.110103] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.902 [2024-06-10 10:54:41.110113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.902 qpair failed and we were unable to recover it. 
00:29:16.902 [2024-06-10 10:54:41.120034] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.902 [2024-06-10 10:54:41.120085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.902 [2024-06-10 10:54:41.120097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.902 [2024-06-10 10:54:41.120102] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.902 [2024-06-10 10:54:41.120106] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.902 [2024-06-10 10:54:41.120117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.902 qpair failed and we were unable to recover it. 00:29:16.902 [2024-06-10 10:54:41.130077] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.902 [2024-06-10 10:54:41.130129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.902 [2024-06-10 10:54:41.130140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.902 [2024-06-10 10:54:41.130145] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.902 [2024-06-10 10:54:41.130150] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.902 [2024-06-10 10:54:41.130160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.902 qpair failed and we were unable to recover it. 00:29:16.902 [2024-06-10 10:54:41.140106] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.902 [2024-06-10 10:54:41.140158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.902 [2024-06-10 10:54:41.140170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.902 [2024-06-10 10:54:41.140175] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.902 [2024-06-10 10:54:41.140180] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.902 [2024-06-10 10:54:41.140190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.902 qpair failed and we were unable to recover it. 
00:29:16.902 [2024-06-10 10:54:41.149989] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.902 [2024-06-10 10:54:41.150044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.902 [2024-06-10 10:54:41.150058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.902 [2024-06-10 10:54:41.150063] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.902 [2024-06-10 10:54:41.150067] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.902 [2024-06-10 10:54:41.150078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.902 qpair failed and we were unable to recover it. 00:29:16.902 [2024-06-10 10:54:41.160177] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.902 [2024-06-10 10:54:41.160252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.902 [2024-06-10 10:54:41.160264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.902 [2024-06-10 10:54:41.160269] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.902 [2024-06-10 10:54:41.160273] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.902 [2024-06-10 10:54:41.160284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.902 qpair failed and we were unable to recover it. 00:29:16.902 [2024-06-10 10:54:41.170176] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.902 [2024-06-10 10:54:41.170226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.902 [2024-06-10 10:54:41.170237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.902 [2024-06-10 10:54:41.170245] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.902 [2024-06-10 10:54:41.170250] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.902 [2024-06-10 10:54:41.170260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.902 qpair failed and we were unable to recover it. 
00:29:16.902 [2024-06-10 10:54:41.180214] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.902 [2024-06-10 10:54:41.180269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.902 [2024-06-10 10:54:41.180281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.902 [2024-06-10 10:54:41.180286] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.902 [2024-06-10 10:54:41.180290] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:16.902 [2024-06-10 10:54:41.180300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:16.902 qpair failed and we were unable to recover it. 00:29:17.165 [2024-06-10 10:54:41.190265] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.165 [2024-06-10 10:54:41.190324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.165 [2024-06-10 10:54:41.190336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.165 [2024-06-10 10:54:41.190341] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.165 [2024-06-10 10:54:41.190345] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.165 [2024-06-10 10:54:41.190359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.165 qpair failed and we were unable to recover it. 00:29:17.165 [2024-06-10 10:54:41.200264] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.165 [2024-06-10 10:54:41.200359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.165 [2024-06-10 10:54:41.200371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.165 [2024-06-10 10:54:41.200376] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.165 [2024-06-10 10:54:41.200381] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.165 [2024-06-10 10:54:41.200391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.165 qpair failed and we were unable to recover it. 
00:29:17.165 [2024-06-10 10:54:41.210296] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.165 [2024-06-10 10:54:41.210346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.165 [2024-06-10 10:54:41.210358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.165 [2024-06-10 10:54:41.210363] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.165 [2024-06-10 10:54:41.210367] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.165 [2024-06-10 10:54:41.210377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.165 qpair failed and we were unable to recover it. 00:29:17.165 [2024-06-10 10:54:41.220216] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.165 [2024-06-10 10:54:41.220271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.165 [2024-06-10 10:54:41.220284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.165 [2024-06-10 10:54:41.220289] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.165 [2024-06-10 10:54:41.220293] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.165 [2024-06-10 10:54:41.220304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.165 qpair failed and we were unable to recover it. 00:29:17.165 [2024-06-10 10:54:41.230241] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.165 [2024-06-10 10:54:41.230296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.165 [2024-06-10 10:54:41.230309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.165 [2024-06-10 10:54:41.230314] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.165 [2024-06-10 10:54:41.230319] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.165 [2024-06-10 10:54:41.230330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.165 qpair failed and we were unable to recover it. 
00:29:17.165 [2024-06-10 10:54:41.240382] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.165 [2024-06-10 10:54:41.240432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.165 [2024-06-10 10:54:41.240447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.165 [2024-06-10 10:54:41.240452] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.165 [2024-06-10 10:54:41.240456] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.165 [2024-06-10 10:54:41.240467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.165 qpair failed and we were unable to recover it. 00:29:17.165 [2024-06-10 10:54:41.250421] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.165 [2024-06-10 10:54:41.250547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.165 [2024-06-10 10:54:41.250559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.165 [2024-06-10 10:54:41.250564] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.165 [2024-06-10 10:54:41.250569] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.165 [2024-06-10 10:54:41.250580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.165 qpair failed and we were unable to recover it. 00:29:17.165 [2024-06-10 10:54:41.260447] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.166 [2024-06-10 10:54:41.260497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.166 [2024-06-10 10:54:41.260509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.166 [2024-06-10 10:54:41.260514] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.166 [2024-06-10 10:54:41.260519] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.166 [2024-06-10 10:54:41.260529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.166 qpair failed and we were unable to recover it. 
00:29:17.166 [2024-06-10 10:54:41.270378] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.166 [2024-06-10 10:54:41.270440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.166 [2024-06-10 10:54:41.270452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.166 [2024-06-10 10:54:41.270457] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.166 [2024-06-10 10:54:41.270462] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.166 [2024-06-10 10:54:41.270472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.166 qpair failed and we were unable to recover it. 00:29:17.166 [2024-06-10 10:54:41.280451] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.166 [2024-06-10 10:54:41.280501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.166 [2024-06-10 10:54:41.280513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.166 [2024-06-10 10:54:41.280518] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.166 [2024-06-10 10:54:41.280524] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.166 [2024-06-10 10:54:41.280535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.166 qpair failed and we were unable to recover it. 00:29:17.166 [2024-06-10 10:54:41.290484] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.166 [2024-06-10 10:54:41.290536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.166 [2024-06-10 10:54:41.290548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.166 [2024-06-10 10:54:41.290553] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.166 [2024-06-10 10:54:41.290557] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.166 [2024-06-10 10:54:41.290567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.166 qpair failed and we were unable to recover it. 
00:29:17.166 [2024-06-10 10:54:41.300548] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.166 [2024-06-10 10:54:41.300599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.166 [2024-06-10 10:54:41.300610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.166 [2024-06-10 10:54:41.300616] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.166 [2024-06-10 10:54:41.300620] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.166 [2024-06-10 10:54:41.300631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.166 qpair failed and we were unable to recover it. 00:29:17.166 [2024-06-10 10:54:41.310566] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.166 [2024-06-10 10:54:41.310622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.166 [2024-06-10 10:54:41.310634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.166 [2024-06-10 10:54:41.310638] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.166 [2024-06-10 10:54:41.310643] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.166 [2024-06-10 10:54:41.310653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.166 qpair failed and we were unable to recover it. 00:29:17.166 [2024-06-10 10:54:41.320584] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.166 [2024-06-10 10:54:41.320689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.166 [2024-06-10 10:54:41.320701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.166 [2024-06-10 10:54:41.320706] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.166 [2024-06-10 10:54:41.320710] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.166 [2024-06-10 10:54:41.320721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.166 qpair failed and we were unable to recover it. 
00:29:17.166 [2024-06-10 10:54:41.330616] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.166 [2024-06-10 10:54:41.330672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.166 [2024-06-10 10:54:41.330684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.166 [2024-06-10 10:54:41.330688] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.166 [2024-06-10 10:54:41.330693] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.166 [2024-06-10 10:54:41.330703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.166 qpair failed and we were unable to recover it. 00:29:17.166 [2024-06-10 10:54:41.340651] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.166 [2024-06-10 10:54:41.340703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.166 [2024-06-10 10:54:41.340715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.166 [2024-06-10 10:54:41.340720] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.166 [2024-06-10 10:54:41.340724] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.166 [2024-06-10 10:54:41.340734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.166 qpair failed and we were unable to recover it. 00:29:17.166 [2024-06-10 10:54:41.350582] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.166 [2024-06-10 10:54:41.350635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.166 [2024-06-10 10:54:41.350647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.166 [2024-06-10 10:54:41.350652] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.166 [2024-06-10 10:54:41.350656] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.166 [2024-06-10 10:54:41.350666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.166 qpair failed and we were unable to recover it. 
00:29:17.166 [2024-06-10 10:54:41.360707] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.166 [2024-06-10 10:54:41.360760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.166 [2024-06-10 10:54:41.360771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.166 [2024-06-10 10:54:41.360776] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.166 [2024-06-10 10:54:41.360781] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.166 [2024-06-10 10:54:41.360791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.166 qpair failed and we were unable to recover it. 00:29:17.166 [2024-06-10 10:54:41.370700] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.166 [2024-06-10 10:54:41.370750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.166 [2024-06-10 10:54:41.370762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.166 [2024-06-10 10:54:41.370767] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.166 [2024-06-10 10:54:41.370774] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.166 [2024-06-10 10:54:41.370784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.166 qpair failed and we were unable to recover it. 00:29:17.166 [2024-06-10 10:54:41.380758] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.166 [2024-06-10 10:54:41.380811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.166 [2024-06-10 10:54:41.380822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.166 [2024-06-10 10:54:41.380828] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.166 [2024-06-10 10:54:41.380832] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.166 [2024-06-10 10:54:41.380842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.166 qpair failed and we were unable to recover it. 
00:29:17.166 [2024-06-10 10:54:41.390789] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.167 [2024-06-10 10:54:41.390882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.167 [2024-06-10 10:54:41.390894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.167 [2024-06-10 10:54:41.390899] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.167 [2024-06-10 10:54:41.390903] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.167 [2024-06-10 10:54:41.390913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.167 qpair failed and we were unable to recover it. 00:29:17.167 [2024-06-10 10:54:41.400791] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.167 [2024-06-10 10:54:41.400843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.167 [2024-06-10 10:54:41.400855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.167 [2024-06-10 10:54:41.400860] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.167 [2024-06-10 10:54:41.400865] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.167 [2024-06-10 10:54:41.400875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.167 qpair failed and we were unable to recover it. 00:29:17.167 [2024-06-10 10:54:41.410824] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.167 [2024-06-10 10:54:41.410873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.167 [2024-06-10 10:54:41.410885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.167 [2024-06-10 10:54:41.410890] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.167 [2024-06-10 10:54:41.410894] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.167 [2024-06-10 10:54:41.410904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.167 qpair failed and we were unable to recover it. 
00:29:17.167 [2024-06-10 10:54:41.420853] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.167 [2024-06-10 10:54:41.420905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.167 [2024-06-10 10:54:41.420917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.167 [2024-06-10 10:54:41.420922] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.167 [2024-06-10 10:54:41.420927] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.167 [2024-06-10 10:54:41.420937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.167 qpair failed and we were unable to recover it. 00:29:17.167 [2024-06-10 10:54:41.430887] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.167 [2024-06-10 10:54:41.430952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.167 [2024-06-10 10:54:41.430971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.167 [2024-06-10 10:54:41.430978] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.167 [2024-06-10 10:54:41.430982] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.167 [2024-06-10 10:54:41.430996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.167 qpair failed and we were unable to recover it. 00:29:17.167 [2024-06-10 10:54:41.440839] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.167 [2024-06-10 10:54:41.440892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.167 [2024-06-10 10:54:41.440905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.167 [2024-06-10 10:54:41.440910] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.167 [2024-06-10 10:54:41.440915] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.167 [2024-06-10 10:54:41.440926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.167 qpair failed and we were unable to recover it. 
00:29:17.167 [2024-06-10 10:54:41.450926] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.167 [2024-06-10 10:54:41.450981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.167 [2024-06-10 10:54:41.451000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.167 [2024-06-10 10:54:41.451006] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.167 [2024-06-10 10:54:41.451011] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.167 [2024-06-10 10:54:41.451025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.167 qpair failed and we were unable to recover it. 00:29:17.431 [2024-06-10 10:54:41.460843] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.431 [2024-06-10 10:54:41.460902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.431 [2024-06-10 10:54:41.460915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.431 [2024-06-10 10:54:41.460924] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.431 [2024-06-10 10:54:41.460928] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.431 [2024-06-10 10:54:41.460940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.431 qpair failed and we were unable to recover it. 00:29:17.431 [2024-06-10 10:54:41.470850] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.431 [2024-06-10 10:54:41.470910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.431 [2024-06-10 10:54:41.470922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.431 [2024-06-10 10:54:41.470928] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.431 [2024-06-10 10:54:41.470932] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.431 [2024-06-10 10:54:41.470943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.431 qpair failed and we were unable to recover it. 
00:29:17.431 [2024-06-10 10:54:41.480998] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.431 [2024-06-10 10:54:41.481057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.431 [2024-06-10 10:54:41.481075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.431 [2024-06-10 10:54:41.481082] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.431 [2024-06-10 10:54:41.481086] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.431 [2024-06-10 10:54:41.481100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.431 qpair failed and we were unable to recover it. 00:29:17.431 [2024-06-10 10:54:41.491030] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.431 [2024-06-10 10:54:41.491092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.431 [2024-06-10 10:54:41.491111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.431 [2024-06-10 10:54:41.491117] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.431 [2024-06-10 10:54:41.491122] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.431 [2024-06-10 10:54:41.491136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.431 qpair failed and we were unable to recover it. 00:29:17.431 [2024-06-10 10:54:41.501047] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.431 [2024-06-10 10:54:41.501103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.431 [2024-06-10 10:54:41.501122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.431 [2024-06-10 10:54:41.501128] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.431 [2024-06-10 10:54:41.501133] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.431 [2024-06-10 10:54:41.501147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.431 qpair failed and we were unable to recover it. 
00:29:17.431 [2024-06-10 10:54:41.511087] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.431 [2024-06-10 10:54:41.511143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.431 [2024-06-10 10:54:41.511157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.431 [2024-06-10 10:54:41.511162] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.431 [2024-06-10 10:54:41.511166] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.431 [2024-06-10 10:54:41.511178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.431 qpair failed and we were unable to recover it. 00:29:17.431 [2024-06-10 10:54:41.521113] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.431 [2024-06-10 10:54:41.521167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.431 [2024-06-10 10:54:41.521179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.431 [2024-06-10 10:54:41.521184] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.431 [2024-06-10 10:54:41.521188] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.431 [2024-06-10 10:54:41.521199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.431 qpair failed and we were unable to recover it. 00:29:17.431 [2024-06-10 10:54:41.531126] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.431 [2024-06-10 10:54:41.531180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.431 [2024-06-10 10:54:41.531193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.431 [2024-06-10 10:54:41.531198] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.431 [2024-06-10 10:54:41.531202] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.431 [2024-06-10 10:54:41.531213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.431 qpair failed and we were unable to recover it. 
00:29:17.431 [2024-06-10 10:54:41.541214] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.431 [2024-06-10 10:54:41.541274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.431 [2024-06-10 10:54:41.541286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.431 [2024-06-10 10:54:41.541291] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.431 [2024-06-10 10:54:41.541295] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.431 [2024-06-10 10:54:41.541306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.431 qpair failed and we were unable to recover it. 00:29:17.431 [2024-06-10 10:54:41.551177] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.431 [2024-06-10 10:54:41.551255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.432 [2024-06-10 10:54:41.551271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.432 [2024-06-10 10:54:41.551276] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.432 [2024-06-10 10:54:41.551280] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.432 [2024-06-10 10:54:41.551292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.432 qpair failed and we were unable to recover it. 00:29:17.432 [2024-06-10 10:54:41.561202] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.432 [2024-06-10 10:54:41.561254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.432 [2024-06-10 10:54:41.561267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.432 [2024-06-10 10:54:41.561272] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.432 [2024-06-10 10:54:41.561276] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.432 [2024-06-10 10:54:41.561287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.432 qpair failed and we were unable to recover it. 
00:29:17.432 [2024-06-10 10:54:41.571245] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.432 [2024-06-10 10:54:41.571299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.432 [2024-06-10 10:54:41.571311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.432 [2024-06-10 10:54:41.571316] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.432 [2024-06-10 10:54:41.571321] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.432 [2024-06-10 10:54:41.571331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.432 qpair failed and we were unable to recover it. 00:29:17.432 [2024-06-10 10:54:41.581283] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.432 [2024-06-10 10:54:41.581336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.432 [2024-06-10 10:54:41.581348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.432 [2024-06-10 10:54:41.581353] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.432 [2024-06-10 10:54:41.581358] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.432 [2024-06-10 10:54:41.581368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.432 qpair failed and we were unable to recover it. 00:29:17.432 [2024-06-10 10:54:41.591163] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.432 [2024-06-10 10:54:41.591222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.432 [2024-06-10 10:54:41.591234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.432 [2024-06-10 10:54:41.591239] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.432 [2024-06-10 10:54:41.591248] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.432 [2024-06-10 10:54:41.591265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.432 qpair failed and we were unable to recover it. 
00:29:17.432 [2024-06-10 10:54:41.601311] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.432 [2024-06-10 10:54:41.601372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.432 [2024-06-10 10:54:41.601384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.432 [2024-06-10 10:54:41.601389] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.432 [2024-06-10 10:54:41.601394] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.432 [2024-06-10 10:54:41.601405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.432 qpair failed and we were unable to recover it. 00:29:17.432 [2024-06-10 10:54:41.611347] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.432 [2024-06-10 10:54:41.611410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.432 [2024-06-10 10:54:41.611423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.432 [2024-06-10 10:54:41.611428] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.432 [2024-06-10 10:54:41.611432] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.432 [2024-06-10 10:54:41.611443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.432 qpair failed and we were unable to recover it. 00:29:17.432 [2024-06-10 10:54:41.621257] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.432 [2024-06-10 10:54:41.621336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.432 [2024-06-10 10:54:41.621348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.432 [2024-06-10 10:54:41.621353] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.432 [2024-06-10 10:54:41.621358] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.432 [2024-06-10 10:54:41.621369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.432 qpair failed and we were unable to recover it. 
00:29:17.432 [2024-06-10 10:54:41.631400] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.432 [2024-06-10 10:54:41.631473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.432 [2024-06-10 10:54:41.631485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.432 [2024-06-10 10:54:41.631491] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.432 [2024-06-10 10:54:41.631495] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.432 [2024-06-10 10:54:41.631505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.432 qpair failed and we were unable to recover it. 00:29:17.432 [2024-06-10 10:54:41.641414] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.432 [2024-06-10 10:54:41.641462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.432 [2024-06-10 10:54:41.641477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.432 [2024-06-10 10:54:41.641482] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.432 [2024-06-10 10:54:41.641486] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.432 [2024-06-10 10:54:41.641496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.432 qpair failed and we were unable to recover it. 00:29:17.432 [2024-06-10 10:54:41.651475] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.432 [2024-06-10 10:54:41.651527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.432 [2024-06-10 10:54:41.651539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.432 [2024-06-10 10:54:41.651544] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.433 [2024-06-10 10:54:41.651548] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.433 [2024-06-10 10:54:41.651558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.433 qpair failed and we were unable to recover it. 
00:29:17.433 [2024-06-10 10:54:41.661562] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.433 [2024-06-10 10:54:41.661616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.433 [2024-06-10 10:54:41.661628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.433 [2024-06-10 10:54:41.661633] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.433 [2024-06-10 10:54:41.661637] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.433 [2024-06-10 10:54:41.661648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.433 qpair failed and we were unable to recover it. 00:29:17.433 [2024-06-10 10:54:41.671518] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.433 [2024-06-10 10:54:41.671575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.433 [2024-06-10 10:54:41.671587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.433 [2024-06-10 10:54:41.671592] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.433 [2024-06-10 10:54:41.671597] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.433 [2024-06-10 10:54:41.671607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.433 qpair failed and we were unable to recover it. 00:29:17.433 [2024-06-10 10:54:41.681532] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.433 [2024-06-10 10:54:41.681598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.433 [2024-06-10 10:54:41.681611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.433 [2024-06-10 10:54:41.681616] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.433 [2024-06-10 10:54:41.681626] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.433 [2024-06-10 10:54:41.681639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.433 qpair failed and we were unable to recover it. 
00:29:17.433 [2024-06-10 10:54:41.691562] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.433 [2024-06-10 10:54:41.691622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.433 [2024-06-10 10:54:41.691634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.433 [2024-06-10 10:54:41.691639] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.433 [2024-06-10 10:54:41.691644] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.433 [2024-06-10 10:54:41.691654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.433 qpair failed and we were unable to recover it. 00:29:17.433 [2024-06-10 10:54:41.701677] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.433 [2024-06-10 10:54:41.701739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.433 [2024-06-10 10:54:41.701751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.433 [2024-06-10 10:54:41.701756] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.433 [2024-06-10 10:54:41.701761] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.433 [2024-06-10 10:54:41.701771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.433 qpair failed and we were unable to recover it. 00:29:17.433 [2024-06-10 10:54:41.711627] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.433 [2024-06-10 10:54:41.711683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.433 [2024-06-10 10:54:41.711695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.433 [2024-06-10 10:54:41.711700] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.433 [2024-06-10 10:54:41.711705] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.433 [2024-06-10 10:54:41.711715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.433 qpair failed and we were unable to recover it. 
00:29:17.696 [2024-06-10 10:54:41.721654] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.696 [2024-06-10 10:54:41.721705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.696 [2024-06-10 10:54:41.721717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.696 [2024-06-10 10:54:41.721722] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.696 [2024-06-10 10:54:41.721727] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.696 [2024-06-10 10:54:41.721738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.696 qpair failed and we were unable to recover it. 00:29:17.696 [2024-06-10 10:54:41.731546] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.696 [2024-06-10 10:54:41.731601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.696 [2024-06-10 10:54:41.731612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.696 [2024-06-10 10:54:41.731618] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.696 [2024-06-10 10:54:41.731622] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.696 [2024-06-10 10:54:41.731632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.696 qpair failed and we were unable to recover it. 00:29:17.696 [2024-06-10 10:54:41.741712] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.697 [2024-06-10 10:54:41.741763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.697 [2024-06-10 10:54:41.741775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.697 [2024-06-10 10:54:41.741780] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.697 [2024-06-10 10:54:41.741784] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.697 [2024-06-10 10:54:41.741794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.697 qpair failed and we were unable to recover it. 
00:29:17.697 [2024-06-10 10:54:41.751743] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.697 [2024-06-10 10:54:41.751804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.697 [2024-06-10 10:54:41.751816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.697 [2024-06-10 10:54:41.751821] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.697 [2024-06-10 10:54:41.751825] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.697 [2024-06-10 10:54:41.751835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.697 qpair failed and we were unable to recover it. 00:29:17.697 [2024-06-10 10:54:41.761639] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.697 [2024-06-10 10:54:41.761694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.697 [2024-06-10 10:54:41.761706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.697 [2024-06-10 10:54:41.761712] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.697 [2024-06-10 10:54:41.761716] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.697 [2024-06-10 10:54:41.761726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.697 qpair failed and we were unable to recover it. 00:29:17.697 [2024-06-10 10:54:41.771800] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.697 [2024-06-10 10:54:41.771882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.697 [2024-06-10 10:54:41.771895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.697 [2024-06-10 10:54:41.771900] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.697 [2024-06-10 10:54:41.771908] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.697 [2024-06-10 10:54:41.771918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.697 qpair failed and we were unable to recover it. 
00:29:17.697 [2024-06-10 10:54:41.781771] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.697 [2024-06-10 10:54:41.781891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.697 [2024-06-10 10:54:41.781904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.697 [2024-06-10 10:54:41.781909] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.697 [2024-06-10 10:54:41.781913] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.697 [2024-06-10 10:54:41.781924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.697 qpair failed and we were unable to recover it. 00:29:17.697 [2024-06-10 10:54:41.791838] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.697 [2024-06-10 10:54:41.791902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.697 [2024-06-10 10:54:41.791915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.697 [2024-06-10 10:54:41.791920] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.697 [2024-06-10 10:54:41.791925] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.697 [2024-06-10 10:54:41.791936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.697 qpair failed and we were unable to recover it. 00:29:17.697 [2024-06-10 10:54:41.801884] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.697 [2024-06-10 10:54:41.801933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.697 [2024-06-10 10:54:41.801946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.697 [2024-06-10 10:54:41.801950] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.697 [2024-06-10 10:54:41.801955] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.697 [2024-06-10 10:54:41.801966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.697 qpair failed and we were unable to recover it. 
00:29:17.697 [2024-06-10 10:54:41.811764] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.697 [2024-06-10 10:54:41.811823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.697 [2024-06-10 10:54:41.811835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.697 [2024-06-10 10:54:41.811841] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.697 [2024-06-10 10:54:41.811845] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.697 [2024-06-10 10:54:41.811855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.697 qpair failed and we were unable to recover it. 00:29:17.697 [2024-06-10 10:54:41.822024] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.697 [2024-06-10 10:54:41.822078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.697 [2024-06-10 10:54:41.822091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.697 [2024-06-10 10:54:41.822096] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.697 [2024-06-10 10:54:41.822100] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.697 [2024-06-10 10:54:41.822111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.697 qpair failed and we were unable to recover it. 00:29:17.697 [2024-06-10 10:54:41.831946] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.697 [2024-06-10 10:54:41.832002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.697 [2024-06-10 10:54:41.832014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.697 [2024-06-10 10:54:41.832020] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.697 [2024-06-10 10:54:41.832024] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.697 [2024-06-10 10:54:41.832035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.697 qpair failed and we were unable to recover it. 
00:29:17.697 [2024-06-10 10:54:41.841959] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.697 [2024-06-10 10:54:41.842059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.697 [2024-06-10 10:54:41.842072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.697 [2024-06-10 10:54:41.842078] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.697 [2024-06-10 10:54:41.842082] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.697 [2024-06-10 10:54:41.842093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.697 qpair failed and we were unable to recover it. 00:29:17.697 [2024-06-10 10:54:41.851999] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.697 [2024-06-10 10:54:41.852053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.697 [2024-06-10 10:54:41.852072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.697 [2024-06-10 10:54:41.852078] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.697 [2024-06-10 10:54:41.852083] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.697 [2024-06-10 10:54:41.852097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.697 qpair failed and we were unable to recover it. 00:29:17.697 [2024-06-10 10:54:41.861910] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.697 [2024-06-10 10:54:41.861965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.697 [2024-06-10 10:54:41.861984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.697 [2024-06-10 10:54:41.861993] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.697 [2024-06-10 10:54:41.861998] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.697 [2024-06-10 10:54:41.862012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.697 qpair failed and we were unable to recover it. 
00:29:17.697 [2024-06-10 10:54:41.872059] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.697 [2024-06-10 10:54:41.872116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.697 [2024-06-10 10:54:41.872129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.698 [2024-06-10 10:54:41.872135] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.698 [2024-06-10 10:54:41.872139] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.698 [2024-06-10 10:54:41.872151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.698 qpair failed and we were unable to recover it. 00:29:17.698 [2024-06-10 10:54:41.882087] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.698 [2024-06-10 10:54:41.882171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.698 [2024-06-10 10:54:41.882184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.698 [2024-06-10 10:54:41.882190] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.698 [2024-06-10 10:54:41.882195] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.698 [2024-06-10 10:54:41.882206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.698 qpair failed and we were unable to recover it. 00:29:17.698 [2024-06-10 10:54:41.892105] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.698 [2024-06-10 10:54:41.892200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.698 [2024-06-10 10:54:41.892213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.698 [2024-06-10 10:54:41.892219] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.698 [2024-06-10 10:54:41.892224] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.698 [2024-06-10 10:54:41.892235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.698 qpair failed and we were unable to recover it. 
00:29:17.698 [2024-06-10 10:54:41.902140] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.698 [2024-06-10 10:54:41.902195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.698 [2024-06-10 10:54:41.902207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.698 [2024-06-10 10:54:41.902212] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.698 [2024-06-10 10:54:41.902217] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.698 [2024-06-10 10:54:41.902228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.698 qpair failed and we were unable to recover it. 00:29:17.698 [2024-06-10 10:54:41.912102] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.698 [2024-06-10 10:54:41.912157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.698 [2024-06-10 10:54:41.912169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.698 [2024-06-10 10:54:41.912174] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.698 [2024-06-10 10:54:41.912178] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.698 [2024-06-10 10:54:41.912188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.698 qpair failed and we were unable to recover it. 00:29:17.698 [2024-06-10 10:54:41.922107] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.698 [2024-06-10 10:54:41.922154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.698 [2024-06-10 10:54:41.922166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.698 [2024-06-10 10:54:41.922171] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.698 [2024-06-10 10:54:41.922176] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.698 [2024-06-10 10:54:41.922186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.698 qpair failed and we were unable to recover it. 
00:29:17.698 [2024-06-10 10:54:41.932218] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.698 [2024-06-10 10:54:41.932270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.698 [2024-06-10 10:54:41.932282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.698 [2024-06-10 10:54:41.932287] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.698 [2024-06-10 10:54:41.932292] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.698 [2024-06-10 10:54:41.932302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.698 qpair failed and we were unable to recover it. 00:29:17.698 [2024-06-10 10:54:41.942241] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.698 [2024-06-10 10:54:41.942297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.698 [2024-06-10 10:54:41.942309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.698 [2024-06-10 10:54:41.942314] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.698 [2024-06-10 10:54:41.942319] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.698 [2024-06-10 10:54:41.942329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.698 qpair failed and we were unable to recover it. 00:29:17.698 [2024-06-10 10:54:41.952179] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.698 [2024-06-10 10:54:41.952287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.698 [2024-06-10 10:54:41.952303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.698 [2024-06-10 10:54:41.952308] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.698 [2024-06-10 10:54:41.952313] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.698 [2024-06-10 10:54:41.952323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.698 qpair failed and we were unable to recover it. 
00:29:17.698 [2024-06-10 10:54:41.962286] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.698 [2024-06-10 10:54:41.962336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.698 [2024-06-10 10:54:41.962347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.698 [2024-06-10 10:54:41.962353] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.698 [2024-06-10 10:54:41.962357] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.698 [2024-06-10 10:54:41.962367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.698 qpair failed and we were unable to recover it. 00:29:17.698 [2024-06-10 10:54:41.972324] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.698 [2024-06-10 10:54:41.972376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.698 [2024-06-10 10:54:41.972388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.698 [2024-06-10 10:54:41.972393] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.698 [2024-06-10 10:54:41.972397] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.698 [2024-06-10 10:54:41.972408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.698 qpair failed and we were unable to recover it. 00:29:17.698 [2024-06-10 10:54:41.982328] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.698 [2024-06-10 10:54:41.982383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.698 [2024-06-10 10:54:41.982396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.698 [2024-06-10 10:54:41.982401] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.698 [2024-06-10 10:54:41.982405] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.698 [2024-06-10 10:54:41.982415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.698 qpair failed and we were unable to recover it. 
00:29:17.961 [2024-06-10 10:54:41.992488] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.961 [2024-06-10 10:54:41.992546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.961 [2024-06-10 10:54:41.992558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.961 [2024-06-10 10:54:41.992563] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.961 [2024-06-10 10:54:41.992567] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.961 [2024-06-10 10:54:41.992580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.961 qpair failed and we were unable to recover it. 00:29:17.961 [2024-06-10 10:54:42.002401] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.961 [2024-06-10 10:54:42.002450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.961 [2024-06-10 10:54:42.002462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.961 [2024-06-10 10:54:42.002467] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.961 [2024-06-10 10:54:42.002472] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.961 [2024-06-10 10:54:42.002483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.961 qpair failed and we were unable to recover it. 00:29:17.961 [2024-06-10 10:54:42.012296] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.961 [2024-06-10 10:54:42.012342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.961 [2024-06-10 10:54:42.012354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.961 [2024-06-10 10:54:42.012359] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.961 [2024-06-10 10:54:42.012364] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.961 [2024-06-10 10:54:42.012375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.961 qpair failed and we were unable to recover it. 
00:29:17.961 [2024-06-10 10:54:42.022434] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.961 [2024-06-10 10:54:42.022491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.961 [2024-06-10 10:54:42.022503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.961 [2024-06-10 10:54:42.022508] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.961 [2024-06-10 10:54:42.022512] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.961 [2024-06-10 10:54:42.022523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.961 qpair failed and we were unable to recover it. 00:29:17.961 [2024-06-10 10:54:42.032508] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.961 [2024-06-10 10:54:42.032563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.961 [2024-06-10 10:54:42.032575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.961 [2024-06-10 10:54:42.032580] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.961 [2024-06-10 10:54:42.032584] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.961 [2024-06-10 10:54:42.032595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.961 qpair failed and we were unable to recover it. 00:29:17.961 [2024-06-10 10:54:42.042477] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.961 [2024-06-10 10:54:42.042536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.961 [2024-06-10 10:54:42.042551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.961 [2024-06-10 10:54:42.042556] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.961 [2024-06-10 10:54:42.042560] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.961 [2024-06-10 10:54:42.042571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.961 qpair failed and we were unable to recover it. 
00:29:17.961 [2024-06-10 10:54:42.052523] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.961 [2024-06-10 10:54:42.052626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.961 [2024-06-10 10:54:42.052639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.961 [2024-06-10 10:54:42.052644] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.961 [2024-06-10 10:54:42.052649] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.961 [2024-06-10 10:54:42.052659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.961 qpair failed and we were unable to recover it. 00:29:17.961 [2024-06-10 10:54:42.062520] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.961 [2024-06-10 10:54:42.062574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.961 [2024-06-10 10:54:42.062585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.961 [2024-06-10 10:54:42.062591] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.961 [2024-06-10 10:54:42.062595] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.961 [2024-06-10 10:54:42.062605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.961 qpair failed and we were unable to recover it. 00:29:17.961 [2024-06-10 10:54:42.072619] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.961 [2024-06-10 10:54:42.072683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.961 [2024-06-10 10:54:42.072695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.961 [2024-06-10 10:54:42.072700] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.961 [2024-06-10 10:54:42.072704] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.961 [2024-06-10 10:54:42.072716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.961 qpair failed and we were unable to recover it. 
00:29:17.961 [2024-06-10 10:54:42.082610] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.961 [2024-06-10 10:54:42.082676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.961 [2024-06-10 10:54:42.082688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.961 [2024-06-10 10:54:42.082693] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.961 [2024-06-10 10:54:42.082697] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.961 [2024-06-10 10:54:42.082712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.961 qpair failed and we were unable to recover it. 00:29:17.961 [2024-06-10 10:54:42.092618] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.961 [2024-06-10 10:54:42.092673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.961 [2024-06-10 10:54:42.092685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.961 [2024-06-10 10:54:42.092690] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.961 [2024-06-10 10:54:42.092695] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.961 [2024-06-10 10:54:42.092705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.961 qpair failed and we were unable to recover it. 00:29:17.961 [2024-06-10 10:54:42.102663] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.961 [2024-06-10 10:54:42.102761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.961 [2024-06-10 10:54:42.102773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.962 [2024-06-10 10:54:42.102779] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.962 [2024-06-10 10:54:42.102783] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.962 [2024-06-10 10:54:42.102794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.962 qpair failed and we were unable to recover it. 
00:29:17.962 [2024-06-10 10:54:42.112699] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.962 [2024-06-10 10:54:42.112758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.962 [2024-06-10 10:54:42.112770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.962 [2024-06-10 10:54:42.112776] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.962 [2024-06-10 10:54:42.112781] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.962 [2024-06-10 10:54:42.112791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.962 qpair failed and we were unable to recover it. 00:29:17.962 [2024-06-10 10:54:42.122658] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.962 [2024-06-10 10:54:42.122721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.962 [2024-06-10 10:54:42.122733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.962 [2024-06-10 10:54:42.122738] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.962 [2024-06-10 10:54:42.122742] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.962 [2024-06-10 10:54:42.122753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.962 qpair failed and we were unable to recover it. 00:29:17.962 [2024-06-10 10:54:42.132774] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.962 [2024-06-10 10:54:42.132836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.962 [2024-06-10 10:54:42.132849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.962 [2024-06-10 10:54:42.132854] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.962 [2024-06-10 10:54:42.132858] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.962 [2024-06-10 10:54:42.132868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.962 qpair failed and we were unable to recover it. 
00:29:17.962 [2024-06-10 10:54:42.142674] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.962 [2024-06-10 10:54:42.142728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.962 [2024-06-10 10:54:42.142740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.962 [2024-06-10 10:54:42.142746] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.962 [2024-06-10 10:54:42.142751] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.962 [2024-06-10 10:54:42.142761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.962 qpair failed and we were unable to recover it. 00:29:17.962 [2024-06-10 10:54:42.152801] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.962 [2024-06-10 10:54:42.152892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.962 [2024-06-10 10:54:42.152904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.962 [2024-06-10 10:54:42.152910] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.962 [2024-06-10 10:54:42.152915] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.962 [2024-06-10 10:54:42.152925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.962 qpair failed and we were unable to recover it. 00:29:17.962 [2024-06-10 10:54:42.162855] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.962 [2024-06-10 10:54:42.162911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.962 [2024-06-10 10:54:42.162924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.962 [2024-06-10 10:54:42.162930] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.962 [2024-06-10 10:54:42.162935] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.962 [2024-06-10 10:54:42.162945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.962 qpair failed and we were unable to recover it. 
00:29:17.962 [2024-06-10 10:54:42.172841] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.962 [2024-06-10 10:54:42.172890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.962 [2024-06-10 10:54:42.172902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.962 [2024-06-10 10:54:42.172908] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.962 [2024-06-10 10:54:42.172915] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.962 [2024-06-10 10:54:42.172926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.962 qpair failed and we were unable to recover it. 00:29:17.962 [2024-06-10 10:54:42.182893] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.962 [2024-06-10 10:54:42.182944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.962 [2024-06-10 10:54:42.182955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.962 [2024-06-10 10:54:42.182961] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.962 [2024-06-10 10:54:42.182965] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.962 [2024-06-10 10:54:42.182976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.962 qpair failed and we were unable to recover it. 00:29:17.962 [2024-06-10 10:54:42.192894] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.962 [2024-06-10 10:54:42.192954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.962 [2024-06-10 10:54:42.192966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.962 [2024-06-10 10:54:42.192972] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.962 [2024-06-10 10:54:42.192976] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.962 [2024-06-10 10:54:42.192987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.962 qpair failed and we were unable to recover it. 
00:29:17.962 [2024-06-10 10:54:42.202910] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.962 [2024-06-10 10:54:42.203007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.962 [2024-06-10 10:54:42.203020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.962 [2024-06-10 10:54:42.203025] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.962 [2024-06-10 10:54:42.203029] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.962 [2024-06-10 10:54:42.203040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.962 qpair failed and we were unable to recover it. 00:29:17.962 [2024-06-10 10:54:42.212955] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.962 [2024-06-10 10:54:42.213007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.962 [2024-06-10 10:54:42.213019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.962 [2024-06-10 10:54:42.213024] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.962 [2024-06-10 10:54:42.213029] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.962 [2024-06-10 10:54:42.213039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.962 qpair failed and we were unable to recover it. 00:29:17.962 [2024-06-10 10:54:42.222999] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.962 [2024-06-10 10:54:42.223053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.962 [2024-06-10 10:54:42.223065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.962 [2024-06-10 10:54:42.223070] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.962 [2024-06-10 10:54:42.223074] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.962 [2024-06-10 10:54:42.223085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.962 qpair failed and we were unable to recover it. 
00:29:17.962 [2024-06-10 10:54:42.233003] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.962 [2024-06-10 10:54:42.233061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.962 [2024-06-10 10:54:42.233073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.962 [2024-06-10 10:54:42.233078] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.962 [2024-06-10 10:54:42.233083] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.962 [2024-06-10 10:54:42.233093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.963 qpair failed and we were unable to recover it. 00:29:17.963 [2024-06-10 10:54:42.242909] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.963 [2024-06-10 10:54:42.242960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.963 [2024-06-10 10:54:42.242972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.963 [2024-06-10 10:54:42.242977] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.963 [2024-06-10 10:54:42.242982] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:17.963 [2024-06-10 10:54:42.242992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.963 qpair failed and we were unable to recover it. 00:29:18.225 [2024-06-10 10:54:42.253133] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.225 [2024-06-10 10:54:42.253189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.225 [2024-06-10 10:54:42.253201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.225 [2024-06-10 10:54:42.253205] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.225 [2024-06-10 10:54:42.253210] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:18.225 [2024-06-10 10:54:42.253220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.225 qpair failed and we were unable to recover it. 
00:29:18.225 [2024-06-10 10:54:42.263101] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.225 [2024-06-10 10:54:42.263163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.225 [2024-06-10 10:54:42.263176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.225 [2024-06-10 10:54:42.263184] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.225 [2024-06-10 10:54:42.263188] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:18.225 [2024-06-10 10:54:42.263199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.225 qpair failed and we were unable to recover it. 00:29:18.225 [2024-06-10 10:54:42.273107] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.225 [2024-06-10 10:54:42.273205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.225 [2024-06-10 10:54:42.273218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.225 [2024-06-10 10:54:42.273223] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.225 [2024-06-10 10:54:42.273228] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:18.225 [2024-06-10 10:54:42.273238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.225 qpair failed and we were unable to recover it. 00:29:18.225 [2024-06-10 10:54:42.283144] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.225 [2024-06-10 10:54:42.283199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.225 [2024-06-10 10:54:42.283211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.225 [2024-06-10 10:54:42.283216] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.225 [2024-06-10 10:54:42.283221] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:18.225 [2024-06-10 10:54:42.283231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.225 qpair failed and we were unable to recover it. 
00:29:18.225 [2024-06-10 10:54:42.293126] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.225 [2024-06-10 10:54:42.293181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.225 [2024-06-10 10:54:42.293193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.225 [2024-06-10 10:54:42.293198] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.225 [2024-06-10 10:54:42.293203] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:18.225 [2024-06-10 10:54:42.293213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.225 qpair failed and we were unable to recover it. 00:29:18.225 [2024-06-10 10:54:42.303191] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.225 [2024-06-10 10:54:42.303248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.225 [2024-06-10 10:54:42.303261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.225 [2024-06-10 10:54:42.303266] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.225 [2024-06-10 10:54:42.303270] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:18.225 [2024-06-10 10:54:42.303282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.225 qpair failed and we were unable to recover it. 00:29:18.225 [2024-06-10 10:54:42.313245] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.225 [2024-06-10 10:54:42.313301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.225 [2024-06-10 10:54:42.313313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.225 [2024-06-10 10:54:42.313318] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.225 [2024-06-10 10:54:42.313322] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:18.225 [2024-06-10 10:54:42.313333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.225 qpair failed and we were unable to recover it. 
00:29:18.225 [2024-06-10 10:54:42.323249] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.226 [2024-06-10 10:54:42.323328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.226 [2024-06-10 10:54:42.323340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.226 [2024-06-10 10:54:42.323345] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.226 [2024-06-10 10:54:42.323350] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:18.226 [2024-06-10 10:54:42.323361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.226 qpair failed and we were unable to recover it. 00:29:18.226 [2024-06-10 10:54:42.333276] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.226 [2024-06-10 10:54:42.333324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.226 [2024-06-10 10:54:42.333336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.226 [2024-06-10 10:54:42.333341] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.226 [2024-06-10 10:54:42.333345] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:18.226 [2024-06-10 10:54:42.333356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.226 qpair failed and we were unable to recover it. 00:29:18.226 [2024-06-10 10:54:42.343181] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.226 [2024-06-10 10:54:42.343233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.226 [2024-06-10 10:54:42.343249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.226 [2024-06-10 10:54:42.343254] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.226 [2024-06-10 10:54:42.343258] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:18.226 [2024-06-10 10:54:42.343269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.226 qpair failed and we were unable to recover it. 
00:29:18.226 [2024-06-10 10:54:42.353335] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.226 [2024-06-10 10:54:42.353390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.226 [2024-06-10 10:54:42.353408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.226 [2024-06-10 10:54:42.353413] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.226 [2024-06-10 10:54:42.353418] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:18.226 [2024-06-10 10:54:42.353428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.226 qpair failed and we were unable to recover it. 00:29:18.226 [2024-06-10 10:54:42.363353] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.226 [2024-06-10 10:54:42.363407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.226 [2024-06-10 10:54:42.363419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.226 [2024-06-10 10:54:42.363424] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.226 [2024-06-10 10:54:42.363429] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:18.226 [2024-06-10 10:54:42.363439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.226 qpair failed and we were unable to recover it. 00:29:18.226 [2024-06-10 10:54:42.373275] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.226 [2024-06-10 10:54:42.373333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.226 [2024-06-10 10:54:42.373345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.226 [2024-06-10 10:54:42.373350] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.226 [2024-06-10 10:54:42.373355] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:18.226 [2024-06-10 10:54:42.373365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.226 qpair failed and we were unable to recover it. 
00:29:18.226 [2024-06-10 10:54:42.383411] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.226 [2024-06-10 10:54:42.383466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.226 [2024-06-10 10:54:42.383477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.226 [2024-06-10 10:54:42.383483] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.226 [2024-06-10 10:54:42.383487] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:18.226 [2024-06-10 10:54:42.383497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.226 qpair failed and we were unable to recover it. 00:29:18.226 [2024-06-10 10:54:42.393320] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.226 [2024-06-10 10:54:42.393378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.226 [2024-06-10 10:54:42.393390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.226 [2024-06-10 10:54:42.393395] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.226 [2024-06-10 10:54:42.393400] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:18.226 [2024-06-10 10:54:42.393410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.226 qpair failed and we were unable to recover it. 00:29:18.226 [2024-06-10 10:54:42.403339] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.226 [2024-06-10 10:54:42.403392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.226 [2024-06-10 10:54:42.403404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.226 [2024-06-10 10:54:42.403409] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.226 [2024-06-10 10:54:42.403414] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:18.226 [2024-06-10 10:54:42.403424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.226 qpair failed and we were unable to recover it. 
00:29:18.226 [2024-06-10 10:54:42.413504] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.226 [2024-06-10 10:54:42.413556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.226 [2024-06-10 10:54:42.413568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.226 [2024-06-10 10:54:42.413573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.226 [2024-06-10 10:54:42.413577] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:18.226 [2024-06-10 10:54:42.413587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.226 qpair failed and we were unable to recover it. 00:29:18.226 [2024-06-10 10:54:42.423621] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.226 [2024-06-10 10:54:42.423672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.226 [2024-06-10 10:54:42.423684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.226 [2024-06-10 10:54:42.423689] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.226 [2024-06-10 10:54:42.423693] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:18.226 [2024-06-10 10:54:42.423703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.226 qpair failed and we were unable to recover it. 00:29:18.226 [2024-06-10 10:54:42.433539] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.226 [2024-06-10 10:54:42.433596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.226 [2024-06-10 10:54:42.433608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.226 [2024-06-10 10:54:42.433613] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.226 [2024-06-10 10:54:42.433617] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:18.226 [2024-06-10 10:54:42.433628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.226 qpair failed and we were unable to recover it. 
00:29:18.226 [2024-06-10 10:54:42.443559] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.226 [2024-06-10 10:54:42.443654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.226 [2024-06-10 10:54:42.443670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.226 [2024-06-10 10:54:42.443675] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.226 [2024-06-10 10:54:42.443680] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:18.226 [2024-06-10 10:54:42.443691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.226 qpair failed and we were unable to recover it. 00:29:18.226 [2024-06-10 10:54:42.453589] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.226 [2024-06-10 10:54:42.453653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.226 [2024-06-10 10:54:42.453665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.226 [2024-06-10 10:54:42.453671] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.226 [2024-06-10 10:54:42.453675] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:18.227 [2024-06-10 10:54:42.453686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.227 qpair failed and we were unable to recover it. 00:29:18.227 [2024-06-10 10:54:42.463606] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.227 [2024-06-10 10:54:42.463661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.227 [2024-06-10 10:54:42.463673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.227 [2024-06-10 10:54:42.463679] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.227 [2024-06-10 10:54:42.463683] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:18.227 [2024-06-10 10:54:42.463694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.227 qpair failed and we were unable to recover it. 
00:29:18.227 [2024-06-10 10:54:42.473647] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.227 [2024-06-10 10:54:42.473703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.227 [2024-06-10 10:54:42.473715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.227 [2024-06-10 10:54:42.473721] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.227 [2024-06-10 10:54:42.473726] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:18.227 [2024-06-10 10:54:42.473736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.227 qpair failed and we were unable to recover it. 00:29:18.227 [2024-06-10 10:54:42.483682] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.227 [2024-06-10 10:54:42.483733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.227 [2024-06-10 10:54:42.483745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.227 [2024-06-10 10:54:42.483750] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.227 [2024-06-10 10:54:42.483755] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:18.227 [2024-06-10 10:54:42.483768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.227 qpair failed and we were unable to recover it. 00:29:18.227 [2024-06-10 10:54:42.493695] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.227 [2024-06-10 10:54:42.493752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.227 [2024-06-10 10:54:42.493765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.227 [2024-06-10 10:54:42.493771] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.227 [2024-06-10 10:54:42.493775] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:18.227 [2024-06-10 10:54:42.493786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.227 qpair failed and we were unable to recover it. 
00:29:18.227 [2024-06-10 10:54:42.503791] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.227 [2024-06-10 10:54:42.503845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.227 [2024-06-10 10:54:42.503858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.227 [2024-06-10 10:54:42.503863] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.227 [2024-06-10 10:54:42.503867] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:18.227 [2024-06-10 10:54:42.503878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.227 qpair failed and we were unable to recover it. 00:29:18.488 [2024-06-10 10:54:42.513781] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.488 [2024-06-10 10:54:42.513841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.488 [2024-06-10 10:54:42.513854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.488 [2024-06-10 10:54:42.513859] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.488 [2024-06-10 10:54:42.513864] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:18.488 [2024-06-10 10:54:42.513874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.488 qpair failed and we were unable to recover it. 00:29:18.488 [2024-06-10 10:54:42.523753] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.488 [2024-06-10 10:54:42.523808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.488 [2024-06-10 10:54:42.523820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.488 [2024-06-10 10:54:42.523825] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.488 [2024-06-10 10:54:42.523830] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:18.488 [2024-06-10 10:54:42.523841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.488 qpair failed and we were unable to recover it. 
00:29:18.488 [2024-06-10 10:54:42.533822] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.488 [2024-06-10 10:54:42.533873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.488 [2024-06-10 10:54:42.533888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.488 [2024-06-10 10:54:42.533893] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.488 [2024-06-10 10:54:42.533898] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:18.488 [2024-06-10 10:54:42.533908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.488 qpair failed and we were unable to recover it. 00:29:18.488 [2024-06-10 10:54:42.543842] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.488 [2024-06-10 10:54:42.543899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.488 [2024-06-10 10:54:42.543917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.488 [2024-06-10 10:54:42.543924] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.488 [2024-06-10 10:54:42.543928] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b5c000b90 00:29:18.488 [2024-06-10 10:54:42.543942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:18.488 qpair failed and we were unable to recover it. 00:29:18.488 [2024-06-10 10:54:42.553855] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.488 [2024-06-10 10:54:42.553933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.488 [2024-06-10 10:54:42.553959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.489 [2024-06-10 10:54:42.553968] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.489 [2024-06-10 10:54:42.553975] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13458c0 00:29:18.489 [2024-06-10 10:54:42.553993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:18.489 qpair failed and we were unable to recover it. 
00:29:18.489 [2024-06-10 10:54:42.563837] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.489 [2024-06-10 10:54:42.563906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.489 [2024-06-10 10:54:42.563931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.489 [2024-06-10 10:54:42.563940] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.489 [2024-06-10 10:54:42.563947] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13458c0 00:29:18.489 [2024-06-10 10:54:42.563966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:18.489 qpair failed and we were unable to recover it. 00:29:18.489 Read completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Read completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Read completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Read completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Read completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Read completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Read completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Read completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Read completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Read completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Read completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Write completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Read completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Read completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Read completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Read completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Write completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Write completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Read completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Read completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Write completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Write completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Read completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Read completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Read completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Write completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Write completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Write completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Write completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Read completed with error 
(sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Write completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Write completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 [2024-06-10 10:54:42.564803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:18.489 [2024-06-10 10:54:42.573956] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.489 [2024-06-10 10:54:42.574103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.489 [2024-06-10 10:54:42.574153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.489 [2024-06-10 10:54:42.574177] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.489 [2024-06-10 10:54:42.574196] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b64000b90 00:29:18.489 [2024-06-10 10:54:42.574241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:18.489 qpair failed and we were unable to recover it. 00:29:18.489 [2024-06-10 10:54:42.583934] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.489 [2024-06-10 10:54:42.584075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.489 [2024-06-10 10:54:42.584103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.489 [2024-06-10 10:54:42.584117] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.489 [2024-06-10 10:54:42.584130] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b64000b90 00:29:18.489 [2024-06-10 10:54:42.584158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:18.489 qpair failed and we were unable to recover it. 
00:29:18.489 [2024-06-10 10:54:42.584548] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13435d0 is same with the state(5) to be set 00:29:18.489 Read completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Write completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Read completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Write completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Read completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Read completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Read completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Read completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Write completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Read completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Read completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Read completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Write completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Write completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Read completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Read completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Write completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Read completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Write completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Read completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Write completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Write completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Read completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Read completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Read completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Write completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Write completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Write completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Write completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Write completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Read completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 Write completed with error (sct=0, sc=8) 00:29:18.489 starting I/O failed 00:29:18.489 [2024-06-10 10:54:42.585344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.489 [2024-06-10 10:54:42.594013] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.489 [2024-06-10 10:54:42.594166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.489 [2024-06-10 10:54:42.594216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect 
command completed with error: sct 1, sc 130 00:29:18.489 [2024-06-10 10:54:42.594238] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.489 [2024-06-10 10:54:42.594269] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b54000b90 00:29:18.489 [2024-06-10 10:54:42.594317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.489 qpair failed and we were unable to recover it. 00:29:18.489 [2024-06-10 10:54:42.604028] ctrlr.c: 756:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.489 [2024-06-10 10:54:42.604140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.489 [2024-06-10 10:54:42.604170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.489 [2024-06-10 10:54:42.604186] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.489 [2024-06-10 10:54:42.604200] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b54000b90 00:29:18.489 [2024-06-10 10:54:42.604230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.489 qpair failed and we were unable to recover it. 00:29:18.490 [2024-06-10 10:54:42.604773] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13435d0 (9): Bad file descriptor 00:29:18.490 Initializing NVMe Controllers 00:29:18.490 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:18.490 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:18.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:18.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:18.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:18.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:18.490 Initialization complete. Launching workers. 
00:29:18.490 Starting thread on core 1 00:29:18.490 Starting thread on core 2 00:29:18.490 Starting thread on core 3 00:29:18.490 Starting thread on core 0 00:29:18.490 10:54:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:18.490 00:29:18.490 real 0m11.290s 00:29:18.490 user 0m20.664s 00:29:18.490 sys 0m3.886s 00:29:18.490 10:54:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:18.490 10:54:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:18.490 ************************************ 00:29:18.490 END TEST nvmf_target_disconnect_tc2 00:29:18.490 ************************************ 00:29:18.490 10:54:42 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:18.490 10:54:42 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:18.490 10:54:42 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:18.490 10:54:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:18.490 10:54:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:29:18.490 10:54:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:18.490 10:54:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:29:18.490 10:54:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:18.490 10:54:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:18.490 rmmod nvme_tcp 00:29:18.490 rmmod nvme_fabrics 00:29:18.490 rmmod nvme_keyring 00:29:18.490 10:54:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:18.490 10:54:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:29:18.490 10:54:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:29:18.490 10:54:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1024965 ']' 00:29:18.490 10:54:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1024965 00:29:18.490 10:54:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@949 -- # '[' -z 1024965 ']' 00:29:18.490 10:54:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # kill -0 1024965 00:29:18.490 10:54:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # uname 00:29:18.490 10:54:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:18.490 10:54:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1024965 00:29:18.750 10:54:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_4 00:29:18.750 10:54:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_4 = sudo ']' 00:29:18.750 10:54:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1024965' 00:29:18.750 killing process with pid 1024965 00:29:18.750 10:54:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # kill 1024965 00:29:18.750 [2024-06-10 10:54:42.803748] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 
1 times 00:29:18.750 10:54:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # wait 1024965 00:29:18.750 10:54:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:18.750 10:54:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:18.750 10:54:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:18.750 10:54:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:18.750 10:54:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:18.750 10:54:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.750 10:54:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:18.750 10:54:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:21.328 10:54:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:21.328 00:29:21.328 real 0m21.240s 00:29:21.328 user 0m48.248s 00:29:21.328 sys 0m9.650s 00:29:21.328 10:54:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:21.328 10:54:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:21.328 ************************************ 00:29:21.328 END TEST nvmf_target_disconnect 00:29:21.328 ************************************ 00:29:21.328 10:54:45 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host 00:29:21.328 10:54:45 nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:21.328 10:54:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:21.328 10:54:45 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:29:21.328 00:29:21.328 real 22m33.984s 00:29:21.328 user 47m21.158s 00:29:21.328 sys 7m8.399s 00:29:21.328 10:54:45 nvmf_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:21.328 10:54:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:21.328 ************************************ 00:29:21.328 END TEST nvmf_tcp 00:29:21.328 ************************************ 00:29:21.328 10:54:45 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:29:21.329 10:54:45 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:21.329 10:54:45 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:29:21.329 10:54:45 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:21.329 10:54:45 -- common/autotest_common.sh@10 -- # set +x 00:29:21.329 ************************************ 00:29:21.329 START TEST spdkcli_nvmf_tcp 00:29:21.329 ************************************ 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:21.329 * Looking for test storage... 
00:29:21.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1026796 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1026796 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@830 -- # '[' -z 1026796 ']' 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:21.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:21.329 10:54:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:21.329 [2024-06-10 10:54:45.333672] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:29:21.329 [2024-06-10 10:54:45.333741] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1026796 ] 00:29:21.329 EAL: No free 2048 kB hugepages reported on node 1 00:29:21.329 [2024-06-10 10:54:45.398166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:21.329 [2024-06-10 10:54:45.473824] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:29:21.329 [2024-06-10 10:54:45.473828] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:29:21.903 10:54:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:21.903 10:54:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@863 -- # return 0 00:29:21.903 10:54:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:21.903 10:54:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:21.903 10:54:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:21.903 10:54:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:21.903 10:54:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:21.903 10:54:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:21.903 10:54:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:21.903 10:54:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:21.903 10:54:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:21.903 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:21.903 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:21.903 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:21.903 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:21.903 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:21.903 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:21.903 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:21.903 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:21.903 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:21.903 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:21.903 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:21.903 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:21.903 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:21.903 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:21.903 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:21.903 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:21.903 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:21.903 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:21.903 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:21.903 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:21.903 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:21.903 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:21.903 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:21.903 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:21.903 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:21.903 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:21.903 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:21.903 ' 00:29:24.448 [2024-06-10 10:54:48.472193] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:25.388 [2024-06-10 10:54:49.635658] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:25.388 [2024-06-10 10:54:49.635992] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:27.929 [2024-06-10 10:54:51.774428] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:29:29.841 [2024-06-10 10:54:53.607826] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:29:30.783 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:30.783 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:30.783 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:30.783 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:30.783 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:30.783 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:30.783 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:30.783 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:30.783 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:30.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:30.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:30.783 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:30.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:30.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:30.783 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:30.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:30.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:30.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:30.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:30.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:30.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:30.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:30.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:30.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:29:30.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:30.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:30.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:30.783 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:31.044 10:54:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:31.044 10:54:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:31.044 10:54:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:31.044 10:54:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:31.044 10:54:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:31.044 10:54:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:31.044 10:54:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:29:31.044 10:54:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll 
/nvmf 00:29:31.305 10:54:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:31.305 10:54:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:31.305 10:54:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:31.305 10:54:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:31.305 10:54:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:31.565 10:54:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:31.565 10:54:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:31.565 10:54:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:31.565 10:54:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:31.565 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:31.565 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:31.565 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:31.565 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:29:31.565 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:29:31.565 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:31.565 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:31.565 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:31.565 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:31.565 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:31.565 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:31.565 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:31.565 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:31.565 ' 00:29:36.855 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:36.855 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:36.855 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:36.855 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:36.855 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:29:36.855 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:29:36.855 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:36.855 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:36.855 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:36.855 
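The check_match step above reduces to dumping the live /nvmf tree and diffing it against a stored pattern; a condensed sketch using the paths from this workspace (the redirect into the generated .test file is implicit in the trace, and the match binary is assumed to take the .match pattern and compare it with the freshly written file of the same name beside it):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # dump the configured nvmf tree exactly as spdkcli sees it
    $SPDK/scripts/spdkcli.py ll /nvmf > $SPDK/test/spdkcli/match_files/spdkcli_nvmf.test
    # compare the dump with the expected pattern; a mismatch fails the test
    $SPDK/test/app/match/match $SPDK/test/spdkcli/match_files/spdkcli_nvmf.test.match
    # the generated dump is removed once the comparison is done
    rm -f $SPDK/test/spdkcli/match_files/spdkcli_nvmf.test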
Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:36.855 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:29:36.855 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:36.855 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:36.855 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:36.855 10:55:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:36.855 10:55:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:36.855 10:55:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:36.855 10:55:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1026796 00:29:36.855 10:55:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@949 -- # '[' -z 1026796 ']' 00:29:36.855 10:55:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # kill -0 1026796 00:29:36.855 10:55:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # uname 00:29:36.855 10:55:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:36.855 10:55:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1026796 00:29:36.855 10:55:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:36.855 10:55:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:36.855 10:55:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1026796' 00:29:36.855 killing process with pid 1026796 00:29:36.855 10:55:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # kill 1026796 00:29:36.855 [2024-06-10 10:55:00.573807] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:36.855 10:55:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # wait 1026796 00:29:36.855 10:55:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:29:36.855 10:55:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:29:36.855 10:55:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1026796 ']' 00:29:36.855 10:55:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1026796 00:29:36.855 10:55:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@949 -- # '[' -z 1026796 ']' 00:29:36.855 10:55:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # kill -0 1026796 00:29:36.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (1026796) - No such process 00:29:36.855 10:55:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # echo 'Process with pid 1026796 is not found' 00:29:36.855 Process with pid 1026796 is not found 00:29:36.855 10:55:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:29:36.855 10:55:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:29:36.855 10:55:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:29:36.855 00:29:36.855 real 0m15.564s 00:29:36.855 user 0m32.060s 00:29:36.855 sys 0m0.689s 00:29:36.855 10:55:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:36.855 10:55:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:29:36.855 ************************************ 00:29:36.855 END TEST spdkcli_nvmf_tcp 00:29:36.855 ************************************ 00:29:36.855 10:55:00 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:36.855 10:55:00 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:29:36.855 10:55:00 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:36.855 10:55:00 -- common/autotest_common.sh@10 -- # set +x 00:29:36.855 ************************************ 00:29:36.855 START TEST nvmf_identify_passthru 00:29:36.855 ************************************ 00:29:36.855 10:55:00 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:36.855 * Looking for test storage... 00:29:36.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:36.855 10:55:00 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:36.855 10:55:00 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:29:36.855 10:55:00 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:36.855 10:55:00 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:36.855 10:55:00 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:36.855 10:55:00 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:36.855 10:55:00 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:36.855 10:55:00 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:36.855 10:55:00 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:36.855 10:55:00 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:36.855 10:55:00 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:36.855 10:55:00 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:36.855 10:55:00 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:36.855 10:55:00 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:36.855 10:55:00 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:36.855 10:55:00 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:36.855 10:55:00 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:36.855 10:55:00 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:36.855 10:55:00 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:36.855 10:55:00 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:36.855 10:55:00 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:36.855 10:55:00 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:36.855 10:55:00 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.855 10:55:00 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.855 10:55:00 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.855 10:55:00 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:36.855 10:55:00 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.855 10:55:00 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:29:36.856 10:55:00 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:36.856 10:55:00 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:36.856 10:55:00 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:36.856 10:55:00 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:36.856 10:55:00 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:36.856 10:55:00 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:36.856 10:55:00 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:36.856 10:55:00 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:36.856 10:55:00 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:36.856 10:55:00 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:36.856 10:55:00 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:36.856 10:55:00 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:36.856 10:55:00 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.856 10:55:00 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.856 10:55:00 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.856 10:55:00 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:36.856 10:55:00 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.856 10:55:00 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:29:36.856 10:55:00 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:36.856 10:55:00 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:36.856 10:55:00 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:36.856 10:55:00 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:36.856 10:55:00 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:36.856 10:55:00 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:36.856 10:55:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:36.856 10:55:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:36.856 10:55:00 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:36.856 10:55:00 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:36.856 10:55:00 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:29:36.856 10:55:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:45.001 10:55:07 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:45.001 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:45.001 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:45.001 Found net devices under 0000:31:00.0: cvl_0_0 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:45.001 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:45.002 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:45.002 Found net devices under 0000:31:00.1: cvl_0_1 00:29:45.002 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:45.002 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:45.002 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:29:45.002 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:45.002 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
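The NIC discovery above keys off sysfs: for each E810 PCI function found in the bus cache, the script looks up the attached net device and keeps it if the link is up. Roughly, using the BDFs this job detected:

    # each matching PCI function exposes its netdev name under sysfs;
    # on this machine the two E810 ports show up as cvl_0_0 and cvl_0_1
    for pci in 0000:31:00.0 0000:31:00.1; do
        ls "/sys/bus/pci/devices/$pci/net/"
    done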
00:29:45.002 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:45.002 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:45.002 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:45.002 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:45.002 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:45.002 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:45.002 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:45.002 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:45.002 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:45.002 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:45.002 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:45.002 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:45.002 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:45.002 10:55:07 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:45.002 10:55:08 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:45.002 10:55:08 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:45.002 10:55:08 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:45.002 10:55:08 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:45.002 10:55:08 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:45.002 10:55:08 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:45.002 10:55:08 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:45.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:45.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:29:45.002 00:29:45.002 --- 10.0.0.2 ping statistics --- 00:29:45.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:45.002 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:29:45.002 10:55:08 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:45.002 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:45.002 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.349 ms 00:29:45.002 00:29:45.002 --- 10.0.0.1 ping statistics --- 00:29:45.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:45.002 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:29:45.002 10:55:08 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:45.002 10:55:08 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:29:45.002 10:55:08 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:45.002 10:55:08 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:45.002 10:55:08 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:45.002 10:55:08 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:45.002 10:55:08 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:45.002 10:55:08 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:45.002 10:55:08 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:45.002 10:55:08 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:29:45.002 10:55:08 nvmf_identify_passthru -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:45.002 10:55:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:45.002 10:55:08 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:29:45.002 10:55:08 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # bdfs=() 00:29:45.002 10:55:08 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # local bdfs 00:29:45.002 10:55:08 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=($(get_nvme_bdfs)) 00:29:45.002 10:55:08 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # get_nvme_bdfs 00:29:45.002 10:55:08 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # bdfs=() 00:29:45.002 10:55:08 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # local bdfs 00:29:45.002 10:55:08 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:45.002 10:55:08 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:45.002 10:55:08 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:29:45.002 10:55:08 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:29:45.002 10:55:08 nvmf_identify_passthru -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:65:00.0 00:29:45.002 10:55:08 nvmf_identify_passthru -- common/autotest_common.sh@1526 -- # echo 0000:65:00.0 00:29:45.002 10:55:08 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:29:45.002 10:55:08 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:29:45.002 10:55:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:29:45.002 10:55:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:29:45.002 10:55:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:29:45.002 EAL: No free 2048 kB hugepages reported on node 1 00:29:45.002 
10:55:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605494 00:29:45.002 10:55:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:29:45.002 10:55:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:29:45.002 10:55:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:29:45.002 EAL: No free 2048 kB hugepages reported on node 1 00:29:45.263 10:55:09 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:29:45.263 10:55:09 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:29:45.263 10:55:09 nvmf_identify_passthru -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:45.263 10:55:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:45.263 10:55:09 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:29:45.263 10:55:09 nvmf_identify_passthru -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:45.263 10:55:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:45.263 10:55:09 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1033844 00:29:45.263 10:55:09 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:45.263 10:55:09 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:45.263 10:55:09 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1033844 00:29:45.263 10:55:09 nvmf_identify_passthru -- common/autotest_common.sh@830 -- # '[' -z 1033844 ']' 00:29:45.263 10:55:09 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:45.263 10:55:09 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:45.263 10:55:09 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:45.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:45.263 10:55:09 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:45.263 10:55:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:45.263 [2024-06-10 10:55:09.455175] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:29:45.263 [2024-06-10 10:55:09.455238] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:45.263 EAL: No free 2048 kB hugepages reported on node 1 00:29:45.263 [2024-06-10 10:55:09.525269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:45.524 [2024-06-10 10:55:09.599591] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:45.524 [2024-06-10 10:55:09.599645] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
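The passthru check hinges on the serial and model captured in this stretch; the extraction boils down to running the identify tool from this build against the local PCIe controller and keeping the third field of each line (same binary and BDF as in the trace):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # identify the local controller at 0000:65:00.0 and keep only the fields
    # that are later compared against what the NVMe/TCP passthru target reports
    $SPDK/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 | grep 'Serial Number:' | awk '{print $3}'
    $SPDK/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 | grep 'Model Number:' | awk '{print $3}'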
00:29:45.524 [2024-06-10 10:55:09.599653] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:45.524 [2024-06-10 10:55:09.599659] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:45.524 [2024-06-10 10:55:09.599665] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:45.524 [2024-06-10 10:55:09.599805] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:29:45.524 [2024-06-10 10:55:09.599918] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:29:45.524 [2024-06-10 10:55:09.600070] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.524 [2024-06-10 10:55:09.600070] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:29:46.096 10:55:10 nvmf_identify_passthru -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:46.096 10:55:10 nvmf_identify_passthru -- common/autotest_common.sh@863 -- # return 0 00:29:46.096 10:55:10 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:29:46.096 10:55:10 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:46.096 10:55:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:46.096 INFO: Log level set to 20 00:29:46.096 INFO: Requests: 00:29:46.096 { 00:29:46.096 "jsonrpc": "2.0", 00:29:46.096 "method": "nvmf_set_config", 00:29:46.096 "id": 1, 00:29:46.096 "params": { 00:29:46.096 "admin_cmd_passthru": { 00:29:46.096 "identify_ctrlr": true 00:29:46.096 } 00:29:46.096 } 00:29:46.096 } 00:29:46.096 00:29:46.096 INFO: response: 00:29:46.096 { 00:29:46.096 "jsonrpc": "2.0", 00:29:46.096 "id": 1, 00:29:46.096 "result": true 00:29:46.096 } 00:29:46.096 00:29:46.096 10:55:10 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:46.096 10:55:10 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:29:46.096 10:55:10 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:46.096 10:55:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:46.096 INFO: Setting log level to 20 00:29:46.096 INFO: Setting log level to 20 00:29:46.096 INFO: Log level set to 20 00:29:46.096 INFO: Log level set to 20 00:29:46.096 INFO: Requests: 00:29:46.096 { 00:29:46.096 "jsonrpc": "2.0", 00:29:46.096 "method": "framework_start_init", 00:29:46.096 "id": 1 00:29:46.096 } 00:29:46.096 00:29:46.096 INFO: Requests: 00:29:46.096 { 00:29:46.096 "jsonrpc": "2.0", 00:29:46.096 "method": "framework_start_init", 00:29:46.096 "id": 1 00:29:46.096 } 00:29:46.096 00:29:46.096 [2024-06-10 10:55:10.315660] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:29:46.096 INFO: response: 00:29:46.096 { 00:29:46.096 "jsonrpc": "2.0", 00:29:46.096 "id": 1, 00:29:46.096 "result": true 00:29:46.096 } 00:29:46.096 00:29:46.096 INFO: response: 00:29:46.096 { 00:29:46.096 "jsonrpc": "2.0", 00:29:46.096 "id": 1, 00:29:46.096 "result": true 00:29:46.096 } 00:29:46.096 00:29:46.096 10:55:10 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:46.096 10:55:10 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:46.096 10:55:10 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:46.096 10:55:10 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:29:46.096 INFO: Setting log level to 40 00:29:46.096 INFO: Setting log level to 40 00:29:46.096 INFO: Setting log level to 40 00:29:46.096 [2024-06-10 10:55:10.328906] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:46.096 10:55:10 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:46.096 10:55:10 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:29:46.096 10:55:10 nvmf_identify_passthru -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:46.096 10:55:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:46.096 10:55:10 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:29:46.096 10:55:10 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:46.096 10:55:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:46.677 Nvme0n1 00:29:46.677 10:55:10 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:46.677 10:55:10 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:29:46.677 10:55:10 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:46.677 10:55:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:46.677 10:55:10 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:46.677 10:55:10 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:46.677 10:55:10 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:46.677 10:55:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:46.677 10:55:10 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:46.677 10:55:10 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:46.677 10:55:10 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:46.677 10:55:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:46.677 [2024-06-10 10:55:10.716307] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:46.677 [2024-06-10 10:55:10.716565] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:46.677 10:55:10 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:46.677 10:55:10 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:29:46.677 10:55:10 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:46.677 10:55:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:46.677 [ 00:29:46.677 { 00:29:46.677 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:46.677 "subtype": "Discovery", 00:29:46.677 "listen_addresses": [], 00:29:46.677 "allow_any_host": true, 00:29:46.677 "hosts": [] 00:29:46.677 }, 00:29:46.677 { 00:29:46.677 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:46.677 "subtype": "NVMe", 00:29:46.677 "listen_addresses": [ 00:29:46.677 { 00:29:46.677 "trtype": "TCP", 
00:29:46.677 "adrfam": "IPv4", 00:29:46.677 "traddr": "10.0.0.2", 00:29:46.677 "trsvcid": "4420" 00:29:46.677 } 00:29:46.677 ], 00:29:46.677 "allow_any_host": true, 00:29:46.677 "hosts": [], 00:29:46.677 "serial_number": "SPDK00000000000001", 00:29:46.677 "model_number": "SPDK bdev Controller", 00:29:46.677 "max_namespaces": 1, 00:29:46.677 "min_cntlid": 1, 00:29:46.677 "max_cntlid": 65519, 00:29:46.677 "namespaces": [ 00:29:46.677 { 00:29:46.677 "nsid": 1, 00:29:46.677 "bdev_name": "Nvme0n1", 00:29:46.677 "name": "Nvme0n1", 00:29:46.677 "nguid": "36344730526054940025384500000027", 00:29:46.678 "uuid": "36344730-5260-5494-0025-384500000027" 00:29:46.678 } 00:29:46.678 ] 00:29:46.678 } 00:29:46.678 ] 00:29:46.678 10:55:10 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:46.678 10:55:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:46.678 10:55:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:29:46.678 10:55:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:29:46.678 EAL: No free 2048 kB hugepages reported on node 1 00:29:46.678 10:55:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:29:46.678 10:55:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:46.678 10:55:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:29:46.678 10:55:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:29:46.678 EAL: No free 2048 kB hugepages reported on node 1 00:29:46.938 10:55:11 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:29:46.938 10:55:11 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:29:46.938 10:55:11 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:29:46.938 10:55:11 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:46.938 10:55:11 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:46.938 10:55:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:46.938 10:55:11 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:46.938 10:55:11 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:29:46.938 10:55:11 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:29:46.938 10:55:11 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:46.938 10:55:11 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:29:46.938 10:55:11 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:46.938 10:55:11 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:29:46.938 10:55:11 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:46.938 10:55:11 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:46.938 rmmod nvme_tcp 00:29:46.938 rmmod nvme_fabrics 00:29:47.199 rmmod 
nvme_keyring 00:29:47.199 10:55:11 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:47.199 10:55:11 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:29:47.199 10:55:11 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:29:47.199 10:55:11 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1033844 ']' 00:29:47.199 10:55:11 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1033844 00:29:47.199 10:55:11 nvmf_identify_passthru -- common/autotest_common.sh@949 -- # '[' -z 1033844 ']' 00:29:47.199 10:55:11 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # kill -0 1033844 00:29:47.199 10:55:11 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # uname 00:29:47.199 10:55:11 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:47.199 10:55:11 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1033844 00:29:47.199 10:55:11 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:47.199 10:55:11 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:47.199 10:55:11 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1033844' 00:29:47.199 killing process with pid 1033844 00:29:47.199 10:55:11 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # kill 1033844 00:29:47.199 [2024-06-10 10:55:11.313879] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:47.199 10:55:11 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # wait 1033844 00:29:47.459 10:55:11 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:47.459 10:55:11 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:47.459 10:55:11 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:47.459 10:55:11 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:47.459 10:55:11 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:47.459 10:55:11 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.459 10:55:11 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:47.459 10:55:11 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.369 10:55:13 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:49.369 00:29:49.369 real 0m12.862s 00:29:49.369 user 0m10.221s 00:29:49.369 sys 0m6.221s 00:29:49.369 10:55:13 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:49.369 10:55:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:49.369 ************************************ 00:29:49.369 END TEST nvmf_identify_passthru 00:29:49.369 ************************************ 00:29:49.630 10:55:13 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:29:49.630 10:55:13 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:29:49.630 10:55:13 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:49.630 10:55:13 -- common/autotest_common.sh@10 -- # set +x 00:29:49.630 ************************************ 00:29:49.630 START TEST nvmf_dif 
00:29:49.630 ************************************ 00:29:49.630 10:55:13 nvmf_dif -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:29:49.630 * Looking for test storage... 00:29:49.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:49.630 10:55:13 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:49.630 10:55:13 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:29:49.630 10:55:13 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:49.630 10:55:13 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:49.630 10:55:13 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:49.630 10:55:13 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:49.630 10:55:13 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:49.630 10:55:13 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:49.630 10:55:13 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:49.630 10:55:13 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:49.630 10:55:13 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:49.630 10:55:13 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:49.630 10:55:13 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:49.630 10:55:13 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:49.630 10:55:13 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:49.630 10:55:13 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:49.630 10:55:13 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:49.630 10:55:13 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:49.630 10:55:13 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:49.630 10:55:13 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:49.630 10:55:13 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:49.630 10:55:13 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:49.630 10:55:13 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.630 10:55:13 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.630 10:55:13 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.630 10:55:13 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:29:49.630 10:55:13 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.630 10:55:13 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:29:49.630 10:55:13 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:49.630 10:55:13 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:49.630 10:55:13 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:49.630 10:55:13 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:49.630 10:55:13 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:49.630 10:55:13 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:49.630 10:55:13 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:49.630 10:55:13 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:49.630 10:55:13 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:29:49.630 10:55:13 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:29:49.630 10:55:13 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:29:49.630 10:55:13 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:29:49.630 10:55:13 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:29:49.630 10:55:13 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:49.630 10:55:13 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:49.630 10:55:13 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:49.630 10:55:13 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:49.630 10:55:13 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:49.630 10:55:13 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.630 10:55:13 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:49.630 10:55:13 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.630 10:55:13 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:49.630 10:55:13 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:49.630 10:55:13 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:29:49.630 10:55:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:57.770 10:55:20 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:57.770 10:55:20 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:29:57.770 10:55:20 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:57.770 10:55:20 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:57.770 10:55:20 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:57.770 10:55:20 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:57.770 10:55:20 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 
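The NULL_* values traced just above (NULL_META=16, NULL_BLOCK_SIZE=512, NULL_SIZE=64, NULL_DIF=1) are the null-bdev geometry that dif.sh hands to bdev_null_create when each test builds its namespace further down in this log. Written out against scripts/rpc.py directly, and assuming the default /var/tmp/spdk.sock RPC socket the target is later started with, the same call looks like this sketch:

    # 64 MB null bdev, 512-byte blocks, 16 bytes of per-block metadata, DIF type 1
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_null_create bdev_null0 64 512 \
        --md-size 16 --dif-type 1
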
00:29:57.770 10:55:20 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:29:57.770 10:55:20 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:57.770 10:55:20 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:29:57.770 10:55:20 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:29:57.770 10:55:20 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:29:57.770 10:55:20 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:29:57.770 10:55:20 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:29:57.770 10:55:20 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:29:57.770 10:55:20 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:57.770 10:55:20 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:57.770 10:55:20 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:57.770 10:55:20 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:57.770 10:55:20 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:57.770 10:55:20 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:57.770 10:55:20 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:57.770 10:55:20 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:57.770 10:55:20 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:57.770 10:55:20 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:57.770 10:55:20 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:57.770 10:55:20 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:57.770 10:55:20 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:57.770 10:55:20 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:57.770 10:55:20 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:57.770 10:55:20 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:57.770 10:55:20 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:57.770 10:55:20 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:57.770 10:55:20 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:57.770 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:57.770 10:55:20 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:57.770 10:55:20 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:57.770 10:55:20 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:57.770 10:55:20 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:57.770 10:55:20 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:57.771 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:57.771 10:55:20 nvmf_dif -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:57.771 Found net devices under 0000:31:00.0: cvl_0_0 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:57.771 Found net devices under 0000:31:00.1: cvl_0_1 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:57.771 10:55:20 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:57.771 10:55:21 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:57.771 10:55:21 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:57.771 10:55:21 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:57.771 10:55:21 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:57.771 10:55:21 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:57.771 10:55:21 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:57.771 10:55:21 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:57.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:57.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms 00:29:57.771 00:29:57.771 --- 10.0.0.2 ping statistics --- 00:29:57.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.771 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:29:57.771 10:55:21 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:57.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:57.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.442 ms 00:29:57.771 00:29:57.771 --- 10.0.0.1 ping statistics --- 00:29:57.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.771 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:29:57.771 10:55:21 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:57.771 10:55:21 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:29:57.771 10:55:21 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:57.771 10:55:21 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:00.317 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:00.317 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:00.317 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:00.317 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:00.317 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:00.317 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:00.317 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:00.317 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:00.317 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:00.317 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:30:00.317 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:00.317 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:00.317 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:00.317 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:00.317 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:00.317 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:00.578 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:00.578 10:55:24 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:00.578 10:55:24 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:00.578 10:55:24 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:00.578 10:55:24 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:00.578 10:55:24 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:00.578 10:55:24 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:00.578 10:55:24 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:00.578 10:55:24 nvmf_dif -- 
target/dif.sh@137 -- # nvmfappstart 00:30:00.578 10:55:24 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:00.578 10:55:24 nvmf_dif -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:00.578 10:55:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:00.578 10:55:24 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1039873 00:30:00.578 10:55:24 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1039873 00:30:00.578 10:55:24 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:00.578 10:55:24 nvmf_dif -- common/autotest_common.sh@830 -- # '[' -z 1039873 ']' 00:30:00.578 10:55:24 nvmf_dif -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:00.578 10:55:24 nvmf_dif -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:00.578 10:55:24 nvmf_dif -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:00.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:00.578 10:55:24 nvmf_dif -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:00.578 10:55:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:00.578 [2024-06-10 10:55:24.792410] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:30:00.578 [2024-06-10 10:55:24.792475] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:00.578 EAL: No free 2048 kB hugepages reported on node 1 00:30:00.578 [2024-06-10 10:55:24.863810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:00.838 [2024-06-10 10:55:24.937351] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:00.839 [2024-06-10 10:55:24.937387] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:00.839 [2024-06-10 10:55:24.937395] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:00.839 [2024-06-10 10:55:24.937401] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:00.839 [2024-06-10 10:55:24.937406] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
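The namespace plumbing traced above boils down to a short sequence: flush any leftover addresses from the two E810 ports, create the cvl_0_0_ns_spdk namespace, move the target-side port into it, address both ends of the link, open the TCP listener port in iptables, and start nvmf_tgt inside the namespace. A condensed recap, with the interface names, addresses and binary path copied from this log (a summary of the trace, not a substitute for nvmf/common.sh):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
        # backgrounded; the test then waits for /var/tmp/spdk.sock to appear

The two ping checks in the trace (10.0.0.2 from the root namespace, 10.0.0.1 from inside cvl_0_0_ns_spdk) confirm connectivity between the two E810 ports before the DIF tests start.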
00:30:00.839 [2024-06-10 10:55:24.937431] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:30:01.409 10:55:25 nvmf_dif -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:01.409 10:55:25 nvmf_dif -- common/autotest_common.sh@863 -- # return 0 00:30:01.409 10:55:25 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:01.409 10:55:25 nvmf_dif -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:01.409 10:55:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:01.409 10:55:25 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:01.409 10:55:25 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:30:01.409 10:55:25 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:01.409 10:55:25 nvmf_dif -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:01.409 10:55:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:01.409 [2024-06-10 10:55:25.596077] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:01.409 10:55:25 nvmf_dif -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:01.409 10:55:25 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:01.409 10:55:25 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:30:01.409 10:55:25 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:01.409 10:55:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:01.409 ************************************ 00:30:01.409 START TEST fio_dif_1_default 00:30:01.409 ************************************ 00:30:01.409 10:55:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # fio_dif_1 00:30:01.409 10:55:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:30:01.409 10:55:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:30:01.409 10:55:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:30:01.409 10:55:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:30:01.409 10:55:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:30:01.409 10:55:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:01.409 10:55:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:01.409 10:55:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:01.409 bdev_null0 00:30:01.409 10:55:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:01.409 10:55:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:01.409 10:55:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:01.409 10:55:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:01.409 10:55:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:01.409 10:55:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:01.409 10:55:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:01.409 10:55:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:01.409 10:55:25 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:01.409 10:55:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:01.409 10:55:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:01.409 10:55:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:01.409 [2024-06-10 10:55:25.684261] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:01.410 [2024-06-10 10:55:25.684458] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:01.410 10:55:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:01.410 10:55:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:01.410 10:55:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:01.410 10:55:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:01.410 10:55:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:30:01.410 10:55:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:01.410 10:55:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:30:01.410 10:55:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:01.410 10:55:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:01.410 10:55:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:01.410 { 00:30:01.410 "params": { 00:30:01.410 "name": "Nvme$subsystem", 00:30:01.410 "trtype": "$TEST_TRANSPORT", 00:30:01.410 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:01.410 "adrfam": "ipv4", 00:30:01.410 "trsvcid": "$NVMF_PORT", 00:30:01.410 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:01.410 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:01.410 "hdgst": ${hdgst:-false}, 00:30:01.410 "ddgst": ${ddgst:-false} 00:30:01.410 }, 00:30:01.410 "method": "bdev_nvme_attach_controller" 00:30:01.410 } 00:30:01.410 EOF 00:30:01.410 )") 00:30:01.410 10:55:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:30:01.410 10:55:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:30:01.410 10:55:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:01.410 10:55:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:30:01.410 10:55:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # local sanitizers 00:30:01.410 10:55:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:30:01.410 10:55:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:01.410 10:55:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # shift 00:30:01.410 10:55:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local asan_lib= 00:30:01.410 10:55:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # for sanitizer in 
"${sanitizers[@]}" 00:30:01.410 10:55:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:30:01.671 10:55:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:01.671 10:55:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # grep libasan 00:30:01.671 10:55:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:30:01.671 10:55:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:30:01.671 10:55:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:01.671 10:55:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:30:01.671 10:55:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:30:01.671 10:55:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:01.671 "params": { 00:30:01.671 "name": "Nvme0", 00:30:01.671 "trtype": "tcp", 00:30:01.671 "traddr": "10.0.0.2", 00:30:01.671 "adrfam": "ipv4", 00:30:01.671 "trsvcid": "4420", 00:30:01.671 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:01.671 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:01.671 "hdgst": false, 00:30:01.671 "ddgst": false 00:30:01.671 }, 00:30:01.671 "method": "bdev_nvme_attach_controller" 00:30:01.671 }' 00:30:01.671 10:55:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:01.671 10:55:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:01.671 10:55:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:01.671 10:55:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:01.671 10:55:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:30:01.671 10:55:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:01.671 10:55:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:01.671 10:55:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:01.671 10:55:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:01.671 10:55:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:01.932 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:01.932 fio-3.35 00:30:01.932 Starting 1 thread 00:30:01.932 EAL: No free 2048 kB hugepages reported on node 1 00:30:14.170 00:30:14.170 filename0: (groupid=0, jobs=1): err= 0: pid=1040399: Mon Jun 10 10:55:36 2024 00:30:14.170 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10039msec) 00:30:14.170 slat (nsec): min=5622, max=32534, avg=6433.30, stdev=1592.87 00:30:14.170 clat (usec): min=41843, max=42190, avg=41984.03, stdev=42.20 00:30:14.170 lat (usec): min=41849, max=42196, avg=41990.46, stdev=42.35 00:30:14.170 clat percentiles (usec): 00:30:14.170 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:30:14.170 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:30:14.170 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:14.170 | 99.00th=[42206], 99.50th=[42206], 
99.90th=[42206], 99.95th=[42206], 00:30:14.170 | 99.99th=[42206] 00:30:14.170 bw ( KiB/s): min= 352, max= 384, per=99.76%, avg=380.80, stdev= 9.85, samples=20 00:30:14.170 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:30:14.170 lat (msec) : 50=100.00% 00:30:14.170 cpu : usr=95.96%, sys=3.84%, ctx=10, majf=0, minf=217 00:30:14.170 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:14.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:14.170 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:14.170 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:14.170 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:14.170 00:30:14.170 Run status group 0 (all jobs): 00:30:14.170 READ: bw=381KiB/s (390kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=3824KiB (3916kB), run=10039-10039msec 00:30:14.170 10:55:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:14.170 10:55:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:30:14.170 10:55:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:30:14.170 10:55:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:14.170 10:55:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:30:14.170 10:55:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:14.170 10:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:14.170 10:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:14.170 10:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:14.170 10:55:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:14.170 10:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:14.170 10:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:14.170 10:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:14.170 00:30:14.170 real 0m11.237s 00:30:14.170 user 0m24.805s 00:30:14.170 sys 0m0.697s 00:30:14.170 10:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:14.170 10:55:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:14.170 ************************************ 00:30:14.170 END TEST fio_dif_1_default 00:30:14.170 ************************************ 00:30:14.170 10:55:36 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:14.170 10:55:36 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:30:14.170 10:55:36 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:14.170 10:55:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:14.170 ************************************ 00:30:14.170 START TEST fio_dif_1_multi_subsystems 00:30:14.170 ************************************ 00:30:14.170 10:55:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # fio_dif_1_multi_subsystems 00:30:14.170 10:55:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:30:14.170 10:55:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:14.170 10:55:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- 
# local sub 00:30:14.170 10:55:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:14.170 10:55:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:30:14.170 10:55:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:30:14.170 10:55:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:14.170 10:55:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:14.170 10:55:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:14.170 bdev_null0 00:30:14.170 10:55:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:14.170 10:55:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:14.170 10:55:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:14.170 10:55:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:14.170 10:55:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:14.170 10:55:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:14.170 10:55:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:14.170 10:55:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:14.170 10:55:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:14.170 10:55:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:14.170 10:55:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:14.170 10:55:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:14.170 [2024-06-10 10:55:36.998921] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:14.170 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:14.170 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:14.170 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:30:14.170 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:30:14.170 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:14.170 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:14.170 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:14.170 bdev_null1 00:30:14.170 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:14.170 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:14.170 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:14.170 10:55:37 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:14.170 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:14.170 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:14.170 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:14.170 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:14.170 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:14.170 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:14.170 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:14.170 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:14.170 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:14.170 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:14.170 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:14.171 { 00:30:14.171 "params": { 00:30:14.171 "name": "Nvme$subsystem", 00:30:14.171 "trtype": "$TEST_TRANSPORT", 00:30:14.171 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:14.171 "adrfam": "ipv4", 00:30:14.171 "trsvcid": "$NVMF_PORT", 00:30:14.171 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:14.171 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:14.171 "hdgst": ${hdgst:-false}, 00:30:14.171 "ddgst": ${ddgst:-false} 00:30:14.171 }, 00:30:14.171 "method": "bdev_nvme_attach_controller" 00:30:14.171 } 00:30:14.171 EOF 00:30:14.171 )") 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # local sanitizers 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # shift 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local asan_lib= 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # grep libasan 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:14.171 { 00:30:14.171 "params": { 00:30:14.171 "name": "Nvme$subsystem", 00:30:14.171 "trtype": "$TEST_TRANSPORT", 00:30:14.171 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:14.171 "adrfam": "ipv4", 00:30:14.171 "trsvcid": "$NVMF_PORT", 00:30:14.171 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:14.171 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:14.171 "hdgst": ${hdgst:-false}, 00:30:14.171 "ddgst": ${ddgst:-false} 00:30:14.171 }, 00:30:14.171 "method": "bdev_nvme_attach_controller" 00:30:14.171 } 00:30:14.171 EOF 00:30:14.171 )") 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
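For the multi-subsystem case the target side is simply the single-subsystem setup done twice, as traced above: one null bdev, one subsystem and one TCP listener per index. A sketch of the same RPCs written out by hand (rpc_cmd in the trace wraps the equivalent calls; the arguments are copied from this log):

    for i in 0 1; do
        scripts/rpc.py bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
            --serial-number 53313233-$i --allow-any-host
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t tcp -a 10.0.0.2 -s 4420
    done

The JSON that gen_nvmf_target_json assembles right after this gives fio one bdev_nvme_attach_controller entry per subsystem (Nvme0 against cnode0, Nvme1 against cnode1), so the two fio jobs reported further down as filename0 and filename1 each exercise their own subsystem.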
00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:14.171 "params": { 00:30:14.171 "name": "Nvme0", 00:30:14.171 "trtype": "tcp", 00:30:14.171 "traddr": "10.0.0.2", 00:30:14.171 "adrfam": "ipv4", 00:30:14.171 "trsvcid": "4420", 00:30:14.171 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:14.171 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:14.171 "hdgst": false, 00:30:14.171 "ddgst": false 00:30:14.171 }, 00:30:14.171 "method": "bdev_nvme_attach_controller" 00:30:14.171 },{ 00:30:14.171 "params": { 00:30:14.171 "name": "Nvme1", 00:30:14.171 "trtype": "tcp", 00:30:14.171 "traddr": "10.0.0.2", 00:30:14.171 "adrfam": "ipv4", 00:30:14.171 "trsvcid": "4420", 00:30:14.171 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:14.171 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:14.171 "hdgst": false, 00:30:14.171 "ddgst": false 00:30:14.171 }, 00:30:14.171 "method": "bdev_nvme_attach_controller" 00:30:14.171 }' 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:14.171 10:55:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:14.171 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:14.171 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:14.171 fio-3.35 00:30:14.171 Starting 2 threads 00:30:14.171 EAL: No free 2048 kB hugepages reported on node 1 00:30:24.173 00:30:24.173 filename0: (groupid=0, jobs=1): err= 0: pid=1042838: Mon Jun 10 10:55:48 2024 00:30:24.173 read: IOPS=185, BW=742KiB/s (760kB/s)(7424KiB/10002msec) 00:30:24.173 slat (nsec): min=5625, max=32865, avg=6732.54, stdev=1966.49 00:30:24.173 clat (usec): min=724, max=42759, avg=21536.91, stdev=20267.15 00:30:24.173 lat (usec): min=729, max=42792, avg=21543.64, stdev=20267.06 00:30:24.173 clat percentiles (usec): 00:30:24.173 | 1.00th=[ 848], 5.00th=[ 1029], 10.00th=[ 1123], 20.00th=[ 1188], 00:30:24.173 | 30.00th=[ 1237], 40.00th=[ 1270], 50.00th=[41157], 60.00th=[41681], 00:30:24.173 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:30:24.173 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:30:24.173 | 99.99th=[42730] 
00:30:24.173 bw ( KiB/s): min= 672, max= 768, per=66.22%, avg=742.74, stdev=33.01, samples=19 00:30:24.173 iops : min= 168, max= 192, avg=185.68, stdev= 8.25, samples=19 00:30:24.173 lat (usec) : 750=0.22%, 1000=4.15% 00:30:24.173 lat (msec) : 2=45.42%, 50=50.22% 00:30:24.173 cpu : usr=96.80%, sys=2.97%, ctx=21, majf=0, minf=181 00:30:24.173 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:24.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:24.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:24.173 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:24.173 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:24.173 filename1: (groupid=0, jobs=1): err= 0: pid=1042839: Mon Jun 10 10:55:48 2024 00:30:24.173 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10038msec) 00:30:24.173 slat (nsec): min=5619, max=39852, avg=6847.76, stdev=2623.46 00:30:24.173 clat (usec): min=40840, max=43043, avg=41979.29, stdev=218.56 00:30:24.173 lat (usec): min=40846, max=43050, avg=41986.13, stdev=219.05 00:30:24.173 clat percentiles (usec): 00:30:24.173 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:30:24.173 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:30:24.173 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:24.173 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:30:24.173 | 99.99th=[43254] 00:30:24.173 bw ( KiB/s): min= 352, max= 384, per=33.91%, avg=380.80, stdev= 9.85, samples=20 00:30:24.173 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:30:24.173 lat (msec) : 50=100.00% 00:30:24.173 cpu : usr=96.89%, sys=2.89%, ctx=10, majf=0, minf=112 00:30:24.173 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:24.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:24.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:24.173 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:24.173 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:24.173 00:30:24.173 Run status group 0 (all jobs): 00:30:24.173 READ: bw=1121KiB/s (1147kB/s), 381KiB/s-742KiB/s (390kB/s-760kB/s), io=11.0MiB (11.5MB), run=10002-10038msec 00:30:24.173 10:55:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:30:24.173 10:55:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:30:24.173 10:55:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:24.173 10:55:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:24.173 10:55:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:30:24.173 10:55:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:24.173 10:55:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:24.173 10:55:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:24.173 10:55:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:24.173 10:55:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:24.173 10:55:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:30:24.173 10:55:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:24.173 10:55:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:24.173 10:55:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:24.173 10:55:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:24.173 10:55:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:30:24.173 10:55:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:24.173 10:55:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:24.173 10:55:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:24.173 10:55:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:24.173 10:55:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:24.173 10:55:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:24.173 10:55:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:24.173 10:55:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:24.173 00:30:24.173 real 0m11.322s 00:30:24.173 user 0m32.819s 00:30:24.173 sys 0m0.952s 00:30:24.173 10:55:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:24.173 10:55:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:24.173 ************************************ 00:30:24.173 END TEST fio_dif_1_multi_subsystems 00:30:24.173 ************************************ 00:30:24.173 10:55:48 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:30:24.173 10:55:48 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:30:24.173 10:55:48 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:24.173 10:55:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:24.173 ************************************ 00:30:24.173 START TEST fio_dif_rand_params 00:30:24.173 ************************************ 00:30:24.173 10:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # fio_dif_rand_params 00:30:24.173 10:55:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:30:24.173 10:55:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:24.173 10:55:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:30:24.173 10:55:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:30:24.173 10:55:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:30:24.173 10:55:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:30:24.173 10:55:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:30:24.173 10:55:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:30:24.173 10:55:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:24.173 10:55:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:24.173 10:55:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:24.173 10:55:48 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:24.173 10:55:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:24.173 10:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:24.173 10:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:24.173 bdev_null0 00:30:24.173 10:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:24.174 [2024-06-10 10:55:48.386634] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 
00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:24.174 { 00:30:24.174 "params": { 00:30:24.174 "name": "Nvme$subsystem", 00:30:24.174 "trtype": "$TEST_TRANSPORT", 00:30:24.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:24.174 "adrfam": "ipv4", 00:30:24.174 "trsvcid": "$NVMF_PORT", 00:30:24.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:24.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:24.174 "hdgst": ${hdgst:-false}, 00:30:24.174 "ddgst": ${ddgst:-false} 00:30:24.174 }, 00:30:24.174 "method": "bdev_nvme_attach_controller" 00:30:24.174 } 00:30:24.174 EOF 00:30:24.174 )") 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
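fio_dif_rand_params exercises a DIF type 3 null bdev (NULL_DIF=3 above) with the heavier job parameters set in target/dif.sh@103: bs=128k, three jobs at queue depth 3 for 5 seconds. The fio_bdev wrapper traced here ultimately reduces to an LD_PRELOAD'ed fio run; the sketch below is only an approximation, since the real run feeds both the JSON config and the job file through /dev/fd. The JSON path, the Nvme0n1 filename, and the thread/direct/time_based flags are assumptions; the plugin path, fio binary, and the rw/bs/iodepth/numjobs/runtime values are taken from this log:

    # assumed: /tmp/bdev_nvme.json holds the bdev_nvme_attach_controller config printed above
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/bdev_nvme.json \
        --thread=1 --direct=1 --time_based=1 --runtime=5 \
        --rw=randread --bs=128k --iodepth=3 --numjobs=3 \
        --name=filename0 --filename=Nvme0n1    # bdev name assumed from the Nvme0 controller
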
00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:24.174 "params": { 00:30:24.174 "name": "Nvme0", 00:30:24.174 "trtype": "tcp", 00:30:24.174 "traddr": "10.0.0.2", 00:30:24.174 "adrfam": "ipv4", 00:30:24.174 "trsvcid": "4420", 00:30:24.174 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:24.174 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:24.174 "hdgst": false, 00:30:24.174 "ddgst": false 00:30:24.174 }, 00:30:24.174 "method": "bdev_nvme_attach_controller" 00:30:24.174 }' 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:30:24.174 10:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:24.454 10:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:24.454 10:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:24.455 10:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:24.455 10:55:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:24.716 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:24.716 ... 
00:30:24.716 fio-3.35 00:30:24.716 Starting 3 threads 00:30:24.716 EAL: No free 2048 kB hugepages reported on node 1 00:30:31.379 00:30:31.379 filename0: (groupid=0, jobs=1): err= 0: pid=1045111: Mon Jun 10 10:55:54 2024 00:30:31.379 read: IOPS=169, BW=21.2MiB/s (22.2MB/s)(107MiB/5045msec) 00:30:31.379 slat (nsec): min=5635, max=32093, avg=8156.51, stdev=1861.98 00:30:31.379 clat (usec): min=6253, max=92529, avg=17639.07, stdev=15028.91 00:30:31.379 lat (usec): min=6259, max=92538, avg=17647.22, stdev=15028.94 00:30:31.379 clat percentiles (usec): 00:30:31.379 | 1.00th=[ 6783], 5.00th=[ 7635], 10.00th=[ 8586], 20.00th=[ 9503], 00:30:31.379 | 30.00th=[10290], 40.00th=[11469], 50.00th=[12518], 60.00th=[13173], 00:30:31.379 | 70.00th=[14222], 80.00th=[15664], 90.00th=[51119], 95.00th=[53216], 00:30:31.379 | 99.00th=[55313], 99.50th=[56361], 99.90th=[92799], 99.95th=[92799], 00:30:31.379 | 99.99th=[92799] 00:30:31.379 bw ( KiB/s): min=14592, max=33024, per=27.35%, avg=21811.20, stdev=5779.78, samples=10 00:30:31.379 iops : min= 114, max= 258, avg=170.40, stdev=45.15, samples=10 00:30:31.379 lat (msec) : 10=26.32%, 20=59.53%, 50=1.64%, 100=12.51% 00:30:31.379 cpu : usr=96.33%, sys=3.41%, ctx=8, majf=0, minf=82 00:30:31.379 IO depths : 1=2.8%, 2=97.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:31.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:31.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:31.379 issued rwts: total=855,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:31.379 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:31.379 filename0: (groupid=0, jobs=1): err= 0: pid=1045112: Mon Jun 10 10:55:54 2024 00:30:31.379 read: IOPS=228, BW=28.6MiB/s (30.0MB/s)(144MiB/5023msec) 00:30:31.379 slat (usec): min=5, max=111, avg=10.43, stdev= 3.70 00:30:31.379 clat (usec): min=4718, max=91538, avg=13109.40, stdev=13224.50 00:30:31.379 lat (usec): min=4729, max=91547, avg=13119.84, stdev=13224.44 00:30:31.379 clat percentiles (usec): 00:30:31.379 | 1.00th=[ 5145], 5.00th=[ 6128], 10.00th=[ 6652], 20.00th=[ 7177], 00:30:31.379 | 30.00th=[ 7635], 40.00th=[ 8225], 50.00th=[ 8979], 60.00th=[ 9634], 00:30:31.379 | 70.00th=[10290], 80.00th=[10945], 90.00th=[47973], 95.00th=[50070], 00:30:31.379 | 99.00th=[52167], 99.50th=[53740], 99.90th=[90702], 99.95th=[91751], 00:30:31.379 | 99.99th=[91751] 00:30:31.379 bw ( KiB/s): min=19968, max=39424, per=36.76%, avg=29312.00, stdev=6179.75, samples=10 00:30:31.379 iops : min= 156, max= 308, avg=229.00, stdev=48.28, samples=10 00:30:31.379 lat (msec) : 10=66.64%, 20=22.91%, 50=5.92%, 100=4.53% 00:30:31.379 cpu : usr=94.84%, sys=4.08%, ctx=33, majf=0, minf=88 00:30:31.379 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:31.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:31.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:31.379 issued rwts: total=1148,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:31.379 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:31.379 filename0: (groupid=0, jobs=1): err= 0: pid=1045113: Mon Jun 10 10:55:54 2024 00:30:31.379 read: IOPS=226, BW=28.3MiB/s (29.6MB/s)(143MiB/5043msec) 00:30:31.379 slat (nsec): min=5632, max=31409, avg=7978.62, stdev=1830.00 00:30:31.379 clat (usec): min=5004, max=92715, avg=13223.66, stdev=11889.15 00:30:31.379 lat (usec): min=5013, max=92724, avg=13231.64, stdev=11889.19 00:30:31.379 clat percentiles (usec): 00:30:31.379 | 
1.00th=[ 5604], 5.00th=[ 6259], 10.00th=[ 6849], 20.00th=[ 8029], 00:30:31.379 | 30.00th=[ 8848], 40.00th=[ 9503], 50.00th=[10159], 60.00th=[11076], 00:30:31.379 | 70.00th=[11863], 80.00th=[12780], 90.00th=[14222], 95.00th=[49546], 00:30:31.379 | 99.00th=[53740], 99.50th=[89654], 99.90th=[90702], 99.95th=[92799], 00:30:31.379 | 99.99th=[92799] 00:30:31.379 bw ( KiB/s): min=16640, max=37376, per=36.50%, avg=29107.20, stdev=5445.38, samples=10 00:30:31.379 iops : min= 130, max= 292, avg=227.40, stdev=42.54, samples=10 00:30:31.379 lat (msec) : 10=47.63%, 20=45.09%, 50=3.51%, 100=3.77% 00:30:31.379 cpu : usr=94.31%, sys=4.60%, ctx=399, majf=0, minf=97 00:30:31.379 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:31.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:31.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:31.379 issued rwts: total=1140,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:31.379 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:31.379 00:30:31.379 Run status group 0 (all jobs): 00:30:31.379 READ: bw=77.9MiB/s (81.7MB/s), 21.2MiB/s-28.6MiB/s (22.2MB/s-30.0MB/s), io=393MiB (412MB), run=5023-5045msec 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:31.379 10:55:54 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:31.379 bdev_null0 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:31.379 [2024-06-10 10:55:54.580895] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:31.379 bdev_null1 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
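(Sketch added for clarity, not part of the captured run: the create_subsystem steps traced above, and repeated for subsystems 1 and 2 in the surrounding entries, all go through rpc_cmd, the autotest wrapper around SPDK's scripts/rpc.py. Issued standalone against an already running nvmf_tgt whose TCP transport was created earlier in the test (not shown in this excerpt), the subsystem-0 setup amounts to the calls below; the rpc.py path is a placeholder.)

RPC=./scripts/rpc.py   # adjust to the SPDK checkout in use

# Null bdev with 512-byte blocks, 16-byte metadata and DIF type 2
# (arguments copied verbatim from the rpc_cmd trace above).
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2

# NVMe-oF subsystem, namespace and TCP listener for cnode0, matching the trace.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420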
00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:30:31.379 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:31.380 bdev_null2 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf 
/dev/fd/62 /dev/fd/61 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:31.380 { 00:30:31.380 "params": { 00:30:31.380 "name": "Nvme$subsystem", 00:30:31.380 "trtype": "$TEST_TRANSPORT", 00:30:31.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:31.380 "adrfam": "ipv4", 00:30:31.380 "trsvcid": "$NVMF_PORT", 00:30:31.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:31.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:31.380 "hdgst": ${hdgst:-false}, 00:30:31.380 "ddgst": ${ddgst:-false} 00:30:31.380 }, 00:30:31.380 "method": "bdev_nvme_attach_controller" 00:30:31.380 } 00:30:31.380 EOF 00:30:31.380 )") 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:31.380 { 00:30:31.380 "params": { 00:30:31.380 "name": "Nvme$subsystem", 00:30:31.380 "trtype": "$TEST_TRANSPORT", 00:30:31.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:31.380 "adrfam": "ipv4", 00:30:31.380 "trsvcid": "$NVMF_PORT", 00:30:31.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:31.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:31.380 "hdgst": ${hdgst:-false}, 00:30:31.380 "ddgst": ${ddgst:-false} 00:30:31.380 }, 00:30:31.380 "method": "bdev_nvme_attach_controller" 00:30:31.380 } 00:30:31.380 EOF 00:30:31.380 )") 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:31.380 { 00:30:31.380 "params": { 00:30:31.380 "name": "Nvme$subsystem", 00:30:31.380 "trtype": "$TEST_TRANSPORT", 00:30:31.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:31.380 "adrfam": "ipv4", 00:30:31.380 "trsvcid": "$NVMF_PORT", 00:30:31.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:31.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:31.380 "hdgst": ${hdgst:-false}, 00:30:31.380 "ddgst": ${ddgst:-false} 00:30:31.380 }, 00:30:31.380 "method": "bdev_nvme_attach_controller" 00:30:31.380 } 00:30:31.380 EOF 00:30:31.380 )") 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:31.380 "params": { 00:30:31.380 "name": "Nvme0", 00:30:31.380 "trtype": "tcp", 00:30:31.380 "traddr": "10.0.0.2", 00:30:31.380 "adrfam": "ipv4", 00:30:31.380 "trsvcid": "4420", 00:30:31.380 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:31.380 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:31.380 "hdgst": false, 00:30:31.380 "ddgst": false 00:30:31.380 }, 00:30:31.380 "method": "bdev_nvme_attach_controller" 00:30:31.380 },{ 00:30:31.380 "params": { 00:30:31.380 "name": "Nvme1", 00:30:31.380 "trtype": "tcp", 00:30:31.380 "traddr": "10.0.0.2", 00:30:31.380 "adrfam": "ipv4", 00:30:31.380 "trsvcid": "4420", 00:30:31.380 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:31.380 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:31.380 "hdgst": false, 00:30:31.380 "ddgst": false 00:30:31.380 }, 00:30:31.380 "method": "bdev_nvme_attach_controller" 00:30:31.380 },{ 00:30:31.380 "params": { 00:30:31.380 "name": "Nvme2", 00:30:31.380 "trtype": "tcp", 00:30:31.380 "traddr": "10.0.0.2", 00:30:31.380 "adrfam": "ipv4", 00:30:31.380 "trsvcid": "4420", 00:30:31.380 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:31.380 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:31.380 "hdgst": false, 00:30:31.380 "ddgst": false 00:30:31.380 }, 00:30:31.380 "method": "bdev_nvme_attach_controller" 00:30:31.380 }' 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1344 -- # asan_lib= 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:31.380 10:55:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:31.380 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:31.380 ... 00:30:31.380 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:31.380 ... 00:30:31.380 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:31.380 ... 00:30:31.380 fio-3.35 00:30:31.380 Starting 24 threads 00:30:31.380 EAL: No free 2048 kB hugepages reported on node 1 00:30:43.611 00:30:43.611 filename0: (groupid=0, jobs=1): err= 0: pid=1046620: Mon Jun 10 10:56:06 2024 00:30:43.611 read: IOPS=520, BW=2083KiB/s (2133kB/s)(20.5MiB/10054msec) 00:30:43.611 slat (nsec): min=5645, max=85749, avg=11390.26, stdev=9235.53 00:30:43.611 clat (usec): min=5319, max=60244, avg=30506.32, stdev=6103.77 00:30:43.611 lat (usec): min=5330, max=60255, avg=30517.71, stdev=6104.21 00:30:43.611 clat percentiles (usec): 00:30:43.611 | 1.00th=[ 6456], 5.00th=[17695], 10.00th=[23462], 20.00th=[29754], 00:30:43.611 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:30:43.611 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33817], 95.00th=[34341], 00:30:43.611 | 99.00th=[51119], 99.50th=[53740], 99.90th=[58459], 99.95th=[58983], 00:30:43.612 | 99.99th=[60031] 00:30:43.612 bw ( KiB/s): min= 1920, max= 2736, per=4.38%, avg=2093.60, stdev=179.24, samples=20 00:30:43.612 iops : min= 480, max= 684, avg=523.40, stdev=44.81, samples=20 00:30:43.612 lat (msec) : 10=1.57%, 20=6.17%, 50=91.12%, 100=1.15% 00:30:43.612 cpu : usr=98.96%, sys=0.72%, ctx=22, majf=0, minf=9 00:30:43.612 IO depths : 1=3.5%, 2=7.7%, 4=19.2%, 8=60.0%, 16=9.6%, 32=0.0%, >=64=0.0% 00:30:43.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.612 complete : 0=0.0%, 4=93.0%, 8=1.7%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.612 issued rwts: total=5236,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.612 filename0: (groupid=0, jobs=1): err= 0: pid=1046621: Mon Jun 10 10:56:06 2024 00:30:43.612 read: IOPS=494, BW=1977KiB/s (2024kB/s)(19.3MiB/10008msec) 00:30:43.612 slat (usec): min=5, max=104, avg=18.88, stdev=16.76 00:30:43.612 clat (usec): min=20635, max=55607, avg=32226.04, stdev=2379.75 00:30:43.612 lat (usec): min=20640, max=55613, avg=32244.92, stdev=2378.47 00:30:43.612 clat percentiles (usec): 00:30:43.612 | 1.00th=[24249], 5.00th=[30278], 10.00th=[30802], 20.00th=[31327], 00:30:43.612 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:30:43.612 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[34341], 00:30:43.612 | 99.00th=[42730], 99.50th=[47449], 99.90th=[55837], 99.95th=[55837], 00:30:43.612 | 99.99th=[55837] 00:30:43.612 bw ( KiB/s): min= 1792, max= 2048, per=4.13%, avg=1973.89, stdev=77.69, samples=19 00:30:43.612 iops : min= 448, max= 512, avg=493.47, stdev=19.42, samples=19 00:30:43.612 lat (msec) : 50=99.70%, 100=0.30% 
00:30:43.612 cpu : usr=99.17%, sys=0.51%, ctx=12, majf=0, minf=9 00:30:43.612 IO depths : 1=5.5%, 2=10.9%, 4=22.8%, 8=53.7%, 16=7.1%, 32=0.0%, >=64=0.0% 00:30:43.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.612 complete : 0=0.0%, 4=93.5%, 8=0.7%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.612 issued rwts: total=4946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.612 filename0: (groupid=0, jobs=1): err= 0: pid=1046622: Mon Jun 10 10:56:06 2024 00:30:43.612 read: IOPS=508, BW=2036KiB/s (2084kB/s)(19.9MiB/10018msec) 00:30:43.612 slat (usec): min=5, max=109, avg=22.60, stdev=19.27 00:30:43.612 clat (usec): min=10535, max=54354, avg=31250.52, stdev=3675.89 00:30:43.612 lat (usec): min=10550, max=54390, avg=31273.11, stdev=3678.14 00:30:43.612 clat percentiles (usec): 00:30:43.612 | 1.00th=[19268], 5.00th=[22152], 10.00th=[28181], 20.00th=[30802], 00:30:43.612 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:30:43.612 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:30:43.612 | 99.00th=[37487], 99.50th=[47973], 99.90th=[53216], 99.95th=[54264], 00:30:43.612 | 99.99th=[54264] 00:30:43.612 bw ( KiB/s): min= 1920, max= 2448, per=4.26%, avg=2035.20, stdev=136.72, samples=20 00:30:43.612 iops : min= 480, max= 612, avg=508.80, stdev=34.18, samples=20 00:30:43.612 lat (msec) : 20=1.53%, 50=98.00%, 100=0.47% 00:30:43.612 cpu : usr=99.04%, sys=0.62%, ctx=14, majf=0, minf=9 00:30:43.612 IO depths : 1=5.3%, 2=10.6%, 4=21.9%, 8=54.8%, 16=7.4%, 32=0.0%, >=64=0.0% 00:30:43.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.612 complete : 0=0.0%, 4=93.2%, 8=1.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.612 issued rwts: total=5098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.612 filename0: (groupid=0, jobs=1): err= 0: pid=1046623: Mon Jun 10 10:56:06 2024 00:30:43.612 read: IOPS=495, BW=1981KiB/s (2028kB/s)(19.4MiB/10018msec) 00:30:43.612 slat (usec): min=5, max=103, avg=23.31, stdev=17.42 00:30:43.612 clat (usec): min=10601, max=61952, avg=32097.73, stdev=3406.40 00:30:43.612 lat (usec): min=10610, max=61976, avg=32121.04, stdev=3406.26 00:30:43.612 clat percentiles (usec): 00:30:43.612 | 1.00th=[18220], 5.00th=[29754], 10.00th=[30802], 20.00th=[31327], 00:30:43.612 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:30:43.612 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33817], 95.00th=[34341], 00:30:43.612 | 99.00th=[46924], 99.50th=[51643], 99.90th=[61604], 99.95th=[61604], 00:30:43.612 | 99.99th=[62129] 00:30:43.612 bw ( KiB/s): min= 1792, max= 2112, per=4.14%, avg=1981.05, stdev=78.53, samples=19 00:30:43.612 iops : min= 448, max= 528, avg=495.26, stdev=19.63, samples=19 00:30:43.612 lat (msec) : 20=1.47%, 50=97.94%, 100=0.58% 00:30:43.612 cpu : usr=99.15%, sys=0.52%, ctx=14, majf=0, minf=9 00:30:43.612 IO depths : 1=4.8%, 2=9.6%, 4=21.2%, 8=56.2%, 16=8.2%, 32=0.0%, >=64=0.0% 00:30:43.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.612 complete : 0=0.0%, 4=93.1%, 8=1.5%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.612 issued rwts: total=4961,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.612 filename0: (groupid=0, jobs=1): err= 0: pid=1046624: Mon Jun 10 10:56:06 2024 00:30:43.612 read: IOPS=508, BW=2033KiB/s 
(2081kB/s)(19.9MiB/10017msec) 00:30:43.612 slat (usec): min=5, max=104, avg=12.83, stdev= 9.51 00:30:43.612 clat (usec): min=6626, max=58628, avg=31383.20, stdev=3799.65 00:30:43.612 lat (usec): min=6643, max=58637, avg=31396.03, stdev=3799.76 00:30:43.612 clat percentiles (usec): 00:30:43.612 | 1.00th=[12780], 5.00th=[23725], 10.00th=[30278], 20.00th=[31065], 00:30:43.612 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:30:43.612 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:30:43.612 | 99.00th=[36439], 99.50th=[38536], 99.90th=[52691], 99.95th=[52691], 00:30:43.612 | 99.99th=[58459] 00:30:43.612 bw ( KiB/s): min= 1920, max= 2709, per=4.24%, avg=2029.85, stdev=176.31, samples=20 00:30:43.612 iops : min= 480, max= 677, avg=507.45, stdev=44.03, samples=20 00:30:43.612 lat (msec) : 10=0.63%, 20=2.08%, 50=97.17%, 100=0.12% 00:30:43.612 cpu : usr=99.09%, sys=0.58%, ctx=14, majf=0, minf=9 00:30:43.612 IO depths : 1=5.4%, 2=11.2%, 4=23.6%, 8=52.7%, 16=7.2%, 32=0.0%, >=64=0.0% 00:30:43.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.612 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.612 issued rwts: total=5090,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.612 filename0: (groupid=0, jobs=1): err= 0: pid=1046625: Mon Jun 10 10:56:06 2024 00:30:43.612 read: IOPS=501, BW=2008KiB/s (2056kB/s)(19.6MiB/10018msec) 00:30:43.612 slat (nsec): min=5637, max=95060, avg=15952.49, stdev=13743.29 00:30:43.612 clat (usec): min=9614, max=58122, avg=31743.69, stdev=5852.76 00:30:43.612 lat (usec): min=9625, max=58128, avg=31759.64, stdev=5853.95 00:30:43.612 clat percentiles (usec): 00:30:43.612 | 1.00th=[16057], 5.00th=[20579], 10.00th=[23725], 20.00th=[30016], 00:30:43.612 | 30.00th=[31327], 40.00th=[31589], 50.00th=[32113], 60.00th=[32375], 00:30:43.612 | 70.00th=[32900], 80.00th=[33424], 90.00th=[37487], 95.00th=[41681], 00:30:43.612 | 99.00th=[51643], 99.50th=[52691], 99.90th=[55313], 99.95th=[57934], 00:30:43.612 | 99.99th=[57934] 00:30:43.612 bw ( KiB/s): min= 1795, max= 2160, per=4.20%, avg=2009.42, stdev=89.37, samples=19 00:30:43.612 iops : min= 448, max= 540, avg=502.32, stdev=22.44, samples=19 00:30:43.612 lat (msec) : 10=0.08%, 20=3.90%, 50=94.19%, 100=1.83% 00:30:43.612 cpu : usr=99.11%, sys=0.56%, ctx=14, majf=0, minf=9 00:30:43.612 IO depths : 1=1.9%, 2=4.1%, 4=12.5%, 8=69.5%, 16=11.9%, 32=0.0%, >=64=0.0% 00:30:43.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.612 complete : 0=0.0%, 4=91.0%, 8=4.6%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.612 issued rwts: total=5028,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.612 filename0: (groupid=0, jobs=1): err= 0: pid=1046626: Mon Jun 10 10:56:06 2024 00:30:43.612 read: IOPS=498, BW=1993KiB/s (2041kB/s)(19.5MiB/10007msec) 00:30:43.612 slat (usec): min=5, max=100, avg=18.52, stdev=16.02 00:30:43.612 clat (usec): min=7829, max=58983, avg=31969.35, stdev=4756.26 00:30:43.612 lat (usec): min=7835, max=59001, avg=31987.87, stdev=4756.79 00:30:43.612 clat percentiles (usec): 00:30:43.612 | 1.00th=[19530], 5.00th=[23462], 10.00th=[26870], 20.00th=[30802], 00:30:43.612 | 30.00th=[31327], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:30:43.612 | 70.00th=[32637], 80.00th=[33162], 90.00th=[34866], 95.00th=[39584], 00:30:43.612 | 99.00th=[52167], 
99.50th=[54264], 99.90th=[58983], 99.95th=[58983], 00:30:43.612 | 99.99th=[58983] 00:30:43.612 bw ( KiB/s): min= 1715, max= 2160, per=4.15%, avg=1984.16, stdev=117.80, samples=19 00:30:43.612 iops : min= 428, max= 540, avg=496.00, stdev=29.54, samples=19 00:30:43.612 lat (msec) : 10=0.06%, 20=1.30%, 50=97.59%, 100=1.04% 00:30:43.612 cpu : usr=99.05%, sys=0.63%, ctx=13, majf=0, minf=9 00:30:43.612 IO depths : 1=2.4%, 2=5.2%, 4=14.4%, 8=66.3%, 16=11.7%, 32=0.0%, >=64=0.0% 00:30:43.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.612 complete : 0=0.0%, 4=91.7%, 8=4.2%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.612 issued rwts: total=4986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.612 filename0: (groupid=0, jobs=1): err= 0: pid=1046627: Mon Jun 10 10:56:06 2024 00:30:43.612 read: IOPS=499, BW=1997KiB/s (2045kB/s)(19.5MiB/10008msec) 00:30:43.612 slat (usec): min=5, max=100, avg=20.05, stdev=16.28 00:30:43.612 clat (usec): min=10103, max=68580, avg=31884.20, stdev=5399.28 00:30:43.612 lat (usec): min=10109, max=68597, avg=31904.26, stdev=5399.88 00:30:43.612 clat percentiles (usec): 00:30:43.612 | 1.00th=[15926], 5.00th=[21627], 10.00th=[26346], 20.00th=[30802], 00:30:43.612 | 30.00th=[31327], 40.00th=[31589], 50.00th=[32113], 60.00th=[32375], 00:30:43.612 | 70.00th=[32637], 80.00th=[33424], 90.00th=[35390], 95.00th=[40633], 00:30:43.612 | 99.00th=[51119], 99.50th=[53216], 99.90th=[68682], 99.95th=[68682], 00:30:43.612 | 99.99th=[68682] 00:30:43.612 bw ( KiB/s): min= 1792, max= 2288, per=4.14%, avg=1981.89, stdev=111.39, samples=19 00:30:43.612 iops : min= 448, max= 572, avg=495.47, stdev=27.85, samples=19 00:30:43.612 lat (msec) : 20=3.46%, 50=95.30%, 100=1.24% 00:30:43.612 cpu : usr=99.03%, sys=0.64%, ctx=14, majf=0, minf=9 00:30:43.612 IO depths : 1=3.0%, 2=6.0%, 4=15.2%, 8=64.9%, 16=10.9%, 32=0.0%, >=64=0.0% 00:30:43.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.612 complete : 0=0.0%, 4=91.7%, 8=4.0%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.612 issued rwts: total=4997,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.613 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.613 filename1: (groupid=0, jobs=1): err= 0: pid=1046628: Mon Jun 10 10:56:06 2024 00:30:43.613 read: IOPS=492, BW=1970KiB/s (2018kB/s)(19.3MiB/10012msec) 00:30:43.613 slat (usec): min=5, max=102, avg=23.99, stdev=18.97 00:30:43.613 clat (usec): min=11257, max=58904, avg=32258.05, stdev=4341.45 00:30:43.613 lat (usec): min=11264, max=58921, avg=32282.04, stdev=4341.47 00:30:43.613 clat percentiles (usec): 00:30:43.613 | 1.00th=[19792], 5.00th=[25560], 10.00th=[30016], 20.00th=[31065], 00:30:43.613 | 30.00th=[31327], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:30:43.613 | 70.00th=[32637], 80.00th=[33424], 90.00th=[34341], 95.00th=[39060], 00:30:43.613 | 99.00th=[51119], 99.50th=[52691], 99.90th=[58983], 99.95th=[58983], 00:30:43.613 | 99.99th=[58983] 00:30:43.613 bw ( KiB/s): min= 1792, max= 2208, per=4.11%, avg=1965.47, stdev=110.63, samples=19 00:30:43.613 iops : min= 448, max= 552, avg=491.37, stdev=27.66, samples=19 00:30:43.613 lat (msec) : 20=1.14%, 50=97.75%, 100=1.12% 00:30:43.613 cpu : usr=97.65%, sys=1.23%, ctx=79, majf=0, minf=9 00:30:43.613 IO depths : 1=4.1%, 2=8.2%, 4=17.7%, 8=60.7%, 16=9.4%, 32=0.0%, >=64=0.0% 00:30:43.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.613 complete : 0=0.0%, 4=92.3%, 
8=3.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.613 issued rwts: total=4932,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.613 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.613 filename1: (groupid=0, jobs=1): err= 0: pid=1046629: Mon Jun 10 10:56:06 2024 00:30:43.613 read: IOPS=515, BW=2060KiB/s (2110kB/s)(20.1MiB/10003msec) 00:30:43.613 slat (usec): min=5, max=112, avg=24.21, stdev=20.02 00:30:43.613 clat (usec): min=12042, max=53163, avg=30863.64, stdev=4467.89 00:30:43.613 lat (usec): min=12048, max=53181, avg=30887.86, stdev=4471.56 00:30:43.613 clat percentiles (usec): 00:30:43.613 | 1.00th=[17957], 5.00th=[21365], 10.00th=[24249], 20.00th=[29492], 00:30:43.613 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31851], 60.00th=[32113], 00:30:43.613 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33817], 95.00th=[34866], 00:30:43.613 | 99.00th=[43779], 99.50th=[49021], 99.90th=[51643], 99.95th=[53216], 00:30:43.613 | 99.99th=[53216] 00:30:43.613 bw ( KiB/s): min= 1795, max= 2368, per=4.32%, avg=2068.37, stdev=151.75, samples=19 00:30:43.613 iops : min= 448, max= 592, avg=517.05, stdev=38.01, samples=19 00:30:43.613 lat (msec) : 20=2.50%, 50=97.09%, 100=0.41% 00:30:43.613 cpu : usr=99.06%, sys=0.63%, ctx=14, majf=0, minf=9 00:30:43.613 IO depths : 1=3.6%, 2=7.3%, 4=16.6%, 8=62.8%, 16=9.8%, 32=0.0%, >=64=0.0% 00:30:43.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.613 complete : 0=0.0%, 4=91.9%, 8=3.2%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.613 issued rwts: total=5152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.613 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.613 filename1: (groupid=0, jobs=1): err= 0: pid=1046630: Mon Jun 10 10:56:06 2024 00:30:43.613 read: IOPS=492, BW=1970KiB/s (2017kB/s)(19.2MiB/10007msec) 00:30:43.613 slat (nsec): min=5681, max=99683, avg=22669.27, stdev=15855.47 00:30:43.613 clat (usec): min=7162, max=54872, avg=32302.55, stdev=4517.65 00:30:43.613 lat (usec): min=7169, max=54884, avg=32325.22, stdev=4517.94 00:30:43.613 clat percentiles (usec): 00:30:43.613 | 1.00th=[17433], 5.00th=[25822], 10.00th=[30540], 20.00th=[31327], 00:30:43.613 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:30:43.613 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[39060], 00:30:43.613 | 99.00th=[51119], 99.50th=[53740], 99.90th=[53740], 99.95th=[54789], 00:30:43.613 | 99.99th=[54789] 00:30:43.613 bw ( KiB/s): min= 1747, max= 2128, per=4.11%, avg=1963.11, stdev=90.75, samples=19 00:30:43.613 iops : min= 436, max= 532, avg=490.74, stdev=22.79, samples=19 00:30:43.613 lat (msec) : 10=0.08%, 20=1.87%, 50=96.47%, 100=1.58% 00:30:43.613 cpu : usr=99.36%, sys=0.34%, ctx=15, majf=0, minf=9 00:30:43.613 IO depths : 1=3.7%, 2=8.4%, 4=20.4%, 8=58.2%, 16=9.3%, 32=0.0%, >=64=0.0% 00:30:43.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.613 complete : 0=0.0%, 4=93.0%, 8=1.7%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.613 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.613 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.613 filename1: (groupid=0, jobs=1): err= 0: pid=1046631: Mon Jun 10 10:56:06 2024 00:30:43.613 read: IOPS=517, BW=2070KiB/s (2120kB/s)(20.3MiB/10027msec) 00:30:43.613 slat (nsec): min=5619, max=90870, avg=11713.22, stdev=9640.39 00:30:43.613 clat (usec): min=6198, max=57499, avg=30823.53, stdev=5023.36 00:30:43.613 lat (usec): min=6215, max=57508, avg=30835.24, stdev=5024.09 
00:30:43.613 clat percentiles (usec): 00:30:43.613 | 1.00th=[10159], 5.00th=[19792], 10.00th=[25035], 20.00th=[30540], 00:30:43.613 | 30.00th=[31327], 40.00th=[31589], 50.00th=[32113], 60.00th=[32375], 00:30:43.613 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33817], 95.00th=[34341], 00:30:43.613 | 99.00th=[42206], 99.50th=[45351], 99.90th=[57410], 99.95th=[57410], 00:30:43.613 | 99.99th=[57410] 00:30:43.613 bw ( KiB/s): min= 1920, max= 2533, per=4.33%, avg=2069.85, stdev=149.22, samples=20 00:30:43.613 iops : min= 480, max= 633, avg=517.45, stdev=37.26, samples=20 00:30:43.613 lat (msec) : 10=0.79%, 20=4.80%, 50=94.14%, 100=0.27% 00:30:43.613 cpu : usr=98.99%, sys=0.69%, ctx=13, majf=0, minf=9 00:30:43.613 IO depths : 1=3.8%, 2=7.6%, 4=17.6%, 8=61.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:30:43.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.613 complete : 0=0.0%, 4=92.2%, 8=2.4%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.613 issued rwts: total=5190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.613 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.613 filename1: (groupid=0, jobs=1): err= 0: pid=1046632: Mon Jun 10 10:56:06 2024 00:30:43.613 read: IOPS=500, BW=2002KiB/s (2050kB/s)(19.6MiB/10012msec) 00:30:43.613 slat (usec): min=5, max=125, avg=23.96, stdev=17.04 00:30:43.613 clat (usec): min=17824, max=50833, avg=31769.51, stdev=2532.96 00:30:43.613 lat (usec): min=17846, max=50840, avg=31793.47, stdev=2534.31 00:30:43.613 clat percentiles (usec): 00:30:43.613 | 1.00th=[21627], 5.00th=[27657], 10.00th=[30540], 20.00th=[31327], 00:30:43.613 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:30:43.613 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:30:43.613 | 99.00th=[37487], 99.50th=[42206], 99.90th=[45876], 99.95th=[50594], 00:30:43.613 | 99.99th=[50594] 00:30:43.613 bw ( KiB/s): min= 1920, max= 2192, per=4.19%, avg=2002.26, stdev=78.95, samples=19 00:30:43.613 iops : min= 480, max= 548, avg=500.53, stdev=19.71, samples=19 00:30:43.613 lat (msec) : 20=0.20%, 50=99.72%, 100=0.08% 00:30:43.613 cpu : usr=99.26%, sys=0.43%, ctx=15, majf=0, minf=9 00:30:43.613 IO depths : 1=5.7%, 2=11.4%, 4=23.2%, 8=52.8%, 16=6.9%, 32=0.0%, >=64=0.0% 00:30:43.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.613 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.613 issued rwts: total=5012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.613 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.613 filename1: (groupid=0, jobs=1): err= 0: pid=1046633: Mon Jun 10 10:56:06 2024 00:30:43.613 read: IOPS=495, BW=1982KiB/s (2029kB/s)(19.4MiB/10011msec) 00:30:43.613 slat (usec): min=5, max=113, avg=24.19, stdev=18.88 00:30:43.613 clat (usec): min=18472, max=69750, avg=32080.89, stdev=4794.15 00:30:43.613 lat (usec): min=18480, max=69765, avg=32105.08, stdev=4794.44 00:30:43.613 clat percentiles (usec): 00:30:43.613 | 1.00th=[19792], 5.00th=[24249], 10.00th=[28181], 20.00th=[30802], 00:30:43.613 | 30.00th=[31327], 40.00th=[31589], 50.00th=[32113], 60.00th=[32375], 00:30:43.613 | 70.00th=[32637], 80.00th=[33162], 90.00th=[34341], 95.00th=[39584], 00:30:43.613 | 99.00th=[50594], 99.50th=[52691], 99.90th=[69731], 99.95th=[69731], 00:30:43.613 | 99.99th=[69731] 00:30:43.613 bw ( KiB/s): min= 1664, max= 2240, per=4.16%, avg=1987.37, stdev=121.36, samples=19 00:30:43.613 iops : min= 416, max= 560, avg=496.84, stdev=30.34, samples=19 00:30:43.613 lat 
(msec) : 20=1.39%, 50=97.52%, 100=1.09% 00:30:43.613 cpu : usr=99.25%, sys=0.44%, ctx=15, majf=0, minf=9 00:30:43.613 IO depths : 1=4.2%, 2=8.4%, 4=19.3%, 8=59.2%, 16=8.9%, 32=0.0%, >=64=0.0% 00:30:43.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.613 complete : 0=0.0%, 4=92.7%, 8=2.1%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.613 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.613 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.613 filename1: (groupid=0, jobs=1): err= 0: pid=1046634: Mon Jun 10 10:56:06 2024 00:30:43.613 read: IOPS=497, BW=1989KiB/s (2037kB/s)(19.4MiB/10006msec) 00:30:43.613 slat (usec): min=5, max=113, avg=22.55, stdev=19.38 00:30:43.613 clat (usec): min=8347, max=54356, avg=31979.19, stdev=5028.56 00:30:43.613 lat (usec): min=8353, max=54363, avg=32001.74, stdev=5029.04 00:30:43.613 clat percentiles (usec): 00:30:43.613 | 1.00th=[18482], 5.00th=[22152], 10.00th=[27132], 20.00th=[30802], 00:30:43.613 | 30.00th=[31327], 40.00th=[31589], 50.00th=[32113], 60.00th=[32375], 00:30:43.613 | 70.00th=[32637], 80.00th=[33162], 90.00th=[34866], 95.00th=[41157], 00:30:43.613 | 99.00th=[50594], 99.50th=[52691], 99.90th=[54264], 99.95th=[54264], 00:30:43.613 | 99.99th=[54264] 00:30:43.613 bw ( KiB/s): min= 1792, max= 2192, per=4.13%, avg=1977.26, stdev=101.83, samples=19 00:30:43.613 iops : min= 448, max= 548, avg=494.32, stdev=25.46, samples=19 00:30:43.613 lat (msec) : 10=0.12%, 20=2.55%, 50=96.14%, 100=1.19% 00:30:43.613 cpu : usr=99.15%, sys=0.47%, ctx=73, majf=0, minf=9 00:30:43.613 IO depths : 1=3.1%, 2=6.3%, 4=16.0%, 8=63.9%, 16=10.7%, 32=0.0%, >=64=0.0% 00:30:43.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.613 complete : 0=0.0%, 4=91.9%, 8=3.6%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.613 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.613 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.613 filename1: (groupid=0, jobs=1): err= 0: pid=1046635: Mon Jun 10 10:56:06 2024 00:30:43.613 read: IOPS=498, BW=1994KiB/s (2042kB/s)(19.5MiB/10026msec) 00:30:43.613 slat (usec): min=5, max=102, avg=20.93, stdev=16.40 00:30:43.613 clat (usec): min=6501, max=50179, avg=31906.90, stdev=2730.04 00:30:43.613 lat (usec): min=6520, max=50188, avg=31927.83, stdev=2730.39 00:30:43.613 clat percentiles (usec): 00:30:43.613 | 1.00th=[20317], 5.00th=[30278], 10.00th=[30802], 20.00th=[31327], 00:30:43.613 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:30:43.613 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:30:43.613 | 99.00th=[35390], 99.50th=[43779], 99.90th=[46400], 99.95th=[46400], 00:30:43.614 | 99.99th=[50070] 00:30:43.614 bw ( KiB/s): min= 1920, max= 2224, per=4.17%, avg=1992.60, stdev=82.96, samples=20 00:30:43.614 iops : min= 480, max= 556, avg=498.15, stdev=20.74, samples=20 00:30:43.614 lat (msec) : 10=0.64%, 20=0.24%, 50=99.08%, 100=0.04% 00:30:43.614 cpu : usr=98.72%, sys=0.71%, ctx=201, majf=0, minf=9 00:30:43.614 IO depths : 1=5.9%, 2=11.9%, 4=24.2%, 8=51.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:30:43.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.614 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.614 issued rwts: total=4998,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.614 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.614 filename2: (groupid=0, jobs=1): err= 0: pid=1046636: Mon Jun 10 
10:56:06 2024 00:30:43.614 read: IOPS=514, BW=2057KiB/s (2106kB/s)(20.1MiB/10013msec) 00:30:43.614 slat (usec): min=5, max=110, avg=14.24, stdev=13.00 00:30:43.614 clat (usec): min=9870, max=65563, avg=31030.93, stdev=6427.74 00:30:43.614 lat (usec): min=9877, max=65579, avg=31045.17, stdev=6428.77 00:30:43.614 clat percentiles (usec): 00:30:43.614 | 1.00th=[15533], 5.00th=[20055], 10.00th=[21890], 20.00th=[26608], 00:30:43.614 | 30.00th=[30278], 40.00th=[31327], 50.00th=[31851], 60.00th=[32375], 00:30:43.614 | 70.00th=[32900], 80.00th=[33424], 90.00th=[37487], 95.00th=[41157], 00:30:43.614 | 99.00th=[51643], 99.50th=[54789], 99.90th=[65274], 99.95th=[65799], 00:30:43.614 | 99.99th=[65799] 00:30:43.614 bw ( KiB/s): min= 1792, max= 2240, per=4.30%, avg=2054.74, stdev=110.22, samples=19 00:30:43.614 iops : min= 448, max= 560, avg=513.68, stdev=27.55, samples=19 00:30:43.614 lat (msec) : 10=0.02%, 20=4.84%, 50=93.41%, 100=1.73% 00:30:43.614 cpu : usr=99.11%, sys=0.55%, ctx=13, majf=0, minf=9 00:30:43.614 IO depths : 1=0.8%, 2=1.6%, 4=7.3%, 8=76.4%, 16=14.0%, 32=0.0%, >=64=0.0% 00:30:43.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.614 complete : 0=0.0%, 4=89.8%, 8=6.8%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.614 issued rwts: total=5148,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.614 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.614 filename2: (groupid=0, jobs=1): err= 0: pid=1046637: Mon Jun 10 10:56:06 2024 00:30:43.614 read: IOPS=490, BW=1962KiB/s (2010kB/s)(19.2MiB/10006msec) 00:30:43.614 slat (usec): min=5, max=117, avg=17.73, stdev=15.94 00:30:43.614 clat (usec): min=7274, max=67670, avg=32507.21, stdev=5990.84 00:30:43.614 lat (usec): min=7282, max=67692, avg=32524.94, stdev=5990.70 00:30:43.614 clat percentiles (usec): 00:30:43.614 | 1.00th=[17957], 5.00th=[22676], 10.00th=[26084], 20.00th=[30540], 00:30:43.614 | 30.00th=[31327], 40.00th=[31851], 50.00th=[32113], 60.00th=[32637], 00:30:43.614 | 70.00th=[33162], 80.00th=[34341], 90.00th=[38536], 95.00th=[43254], 00:30:43.614 | 99.00th=[54789], 99.50th=[55313], 99.90th=[67634], 99.95th=[67634], 00:30:43.614 | 99.99th=[67634] 00:30:43.614 bw ( KiB/s): min= 1616, max= 2112, per=4.07%, avg=1944.00, stdev=124.51, samples=19 00:30:43.614 iops : min= 404, max= 528, avg=486.00, stdev=31.13, samples=19 00:30:43.614 lat (msec) : 10=0.14%, 20=1.96%, 50=95.05%, 100=2.85% 00:30:43.614 cpu : usr=99.30%, sys=0.37%, ctx=15, majf=0, minf=9 00:30:43.614 IO depths : 1=1.3%, 2=2.6%, 4=8.9%, 8=73.7%, 16=13.5%, 32=0.0%, >=64=0.0% 00:30:43.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.614 complete : 0=0.0%, 4=90.3%, 8=6.3%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.614 issued rwts: total=4909,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.614 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.614 filename2: (groupid=0, jobs=1): err= 0: pid=1046638: Mon Jun 10 10:56:06 2024 00:30:43.614 read: IOPS=507, BW=2032KiB/s (2081kB/s)(19.9MiB/10012msec) 00:30:43.614 slat (usec): min=5, max=115, avg=15.33, stdev=13.25 00:30:43.614 clat (usec): min=10630, max=56802, avg=31378.66, stdev=3535.36 00:30:43.614 lat (usec): min=10639, max=56809, avg=31394.00, stdev=3536.59 00:30:43.614 clat percentiles (usec): 00:30:43.614 | 1.00th=[19268], 5.00th=[22414], 10.00th=[28705], 20.00th=[31065], 00:30:43.614 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:30:43.614 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 
00:30:43.614 | 99.00th=[39584], 99.50th=[42206], 99.90th=[54264], 99.95th=[56886], 00:30:43.614 | 99.99th=[56886] 00:30:43.614 bw ( KiB/s): min= 1920, max= 2491, per=4.25%, avg=2033.63, stdev=151.99, samples=19 00:30:43.614 iops : min= 480, max= 622, avg=508.37, stdev=37.87, samples=19 00:30:43.614 lat (msec) : 20=1.57%, 50=98.31%, 100=0.12% 00:30:43.614 cpu : usr=99.09%, sys=0.56%, ctx=33, majf=0, minf=9 00:30:43.614 IO depths : 1=4.1%, 2=8.3%, 4=18.7%, 8=60.3%, 16=8.6%, 32=0.0%, >=64=0.0% 00:30:43.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.614 complete : 0=0.0%, 4=92.4%, 8=2.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.614 issued rwts: total=5086,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.614 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.614 filename2: (groupid=0, jobs=1): err= 0: pid=1046639: Mon Jun 10 10:56:06 2024 00:30:43.614 read: IOPS=493, BW=1974KiB/s (2022kB/s)(19.3MiB/10016msec) 00:30:43.614 slat (usec): min=5, max=124, avg=30.89, stdev=21.39 00:30:43.614 clat (usec): min=22163, max=54936, avg=32147.69, stdev=1662.71 00:30:43.614 lat (usec): min=22172, max=54958, avg=32178.58, stdev=1659.66 00:30:43.614 clat percentiles (usec): 00:30:43.614 | 1.00th=[29492], 5.00th=[30540], 10.00th=[30802], 20.00th=[31327], 00:30:43.614 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:30:43.614 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:30:43.614 | 99.00th=[34866], 99.50th=[34866], 99.90th=[54789], 99.95th=[54789], 00:30:43.614 | 99.99th=[54789] 00:30:43.614 bw ( KiB/s): min= 1792, max= 2048, per=4.13%, avg=1973.89, stdev=77.69, samples=19 00:30:43.614 iops : min= 448, max= 512, avg=493.47, stdev=19.42, samples=19 00:30:43.614 lat (msec) : 50=99.68%, 100=0.32% 00:30:43.614 cpu : usr=99.07%, sys=0.61%, ctx=13, majf=0, minf=11 00:30:43.614 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:43.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.614 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.614 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.614 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.614 filename2: (groupid=0, jobs=1): err= 0: pid=1046640: Mon Jun 10 10:56:06 2024 00:30:43.614 read: IOPS=496, BW=1987KiB/s (2034kB/s)(19.4MiB/10019msec) 00:30:43.614 slat (nsec): min=5538, max=77578, avg=12667.84, stdev=8884.50 00:30:43.614 clat (usec): min=18745, max=56368, avg=32109.93, stdev=2040.08 00:30:43.614 lat (usec): min=18751, max=56384, avg=32122.60, stdev=2039.76 00:30:43.614 clat percentiles (usec): 00:30:43.614 | 1.00th=[23462], 5.00th=[30540], 10.00th=[31065], 20.00th=[31327], 00:30:43.614 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:30:43.614 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:30:43.614 | 99.00th=[34341], 99.50th=[34866], 99.90th=[56361], 99.95th=[56361], 00:30:43.614 | 99.99th=[56361] 00:30:43.614 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1987.37, stdev=78.31, samples=19 00:30:43.614 iops : min= 448, max= 512, avg=496.84, stdev=19.58, samples=19 00:30:43.614 lat (msec) : 20=0.32%, 50=99.36%, 100=0.32% 00:30:43.614 cpu : usr=99.17%, sys=0.51%, ctx=12, majf=0, minf=9 00:30:43.614 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:43.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.614 
complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.614 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.614 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.614 filename2: (groupid=0, jobs=1): err= 0: pid=1046641: Mon Jun 10 10:56:06 2024 00:30:43.614 read: IOPS=497, BW=1990KiB/s (2038kB/s)(19.4MiB/10003msec) 00:30:43.614 slat (usec): min=5, max=113, avg=22.46, stdev=18.21 00:30:43.614 clat (usec): min=10717, max=56252, avg=31979.35, stdev=3761.55 00:30:43.614 lat (usec): min=10723, max=56262, avg=32001.81, stdev=3761.69 00:30:43.614 clat percentiles (usec): 00:30:43.614 | 1.00th=[19530], 5.00th=[26608], 10.00th=[30278], 20.00th=[31065], 00:30:43.614 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:30:43.614 | 70.00th=[32637], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:30:43.614 | 99.00th=[50594], 99.50th=[52691], 99.90th=[55837], 99.95th=[56361], 00:30:43.614 | 99.99th=[56361] 00:30:43.614 bw ( KiB/s): min= 1795, max= 2192, per=4.17%, avg=1994.26, stdev=92.35, samples=19 00:30:43.614 iops : min= 448, max= 548, avg=498.53, stdev=23.18, samples=19 00:30:43.614 lat (msec) : 20=1.19%, 50=97.67%, 100=1.15% 00:30:43.614 cpu : usr=99.09%, sys=0.58%, ctx=15, majf=0, minf=9 00:30:43.614 IO depths : 1=4.8%, 2=9.6%, 4=20.4%, 8=57.0%, 16=8.3%, 32=0.0%, >=64=0.0% 00:30:43.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.614 complete : 0=0.0%, 4=92.9%, 8=1.8%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.614 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.614 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.614 filename2: (groupid=0, jobs=1): err= 0: pid=1046642: Mon Jun 10 10:56:06 2024 00:30:43.614 read: IOPS=490, BW=1961KiB/s (2008kB/s)(19.2MiB/10007msec) 00:30:43.614 slat (usec): min=5, max=100, avg=18.58, stdev=15.45 00:30:43.614 clat (usec): min=7228, max=58996, avg=32493.18, stdev=4481.93 00:30:43.614 lat (usec): min=7235, max=59013, avg=32511.76, stdev=4481.92 00:30:43.614 clat percentiles (usec): 00:30:43.614 | 1.00th=[17957], 5.00th=[26084], 10.00th=[30802], 20.00th=[31327], 00:30:43.614 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32375], 60.00th=[32637], 00:30:43.614 | 70.00th=[32900], 80.00th=[33424], 90.00th=[34341], 95.00th=[39060], 00:30:43.614 | 99.00th=[50594], 99.50th=[54264], 99.90th=[58983], 99.95th=[58983], 00:30:43.614 | 99.99th=[58983] 00:30:43.614 bw ( KiB/s): min= 1715, max= 2048, per=4.09%, avg=1958.05, stdev=78.45, samples=19 00:30:43.614 iops : min= 428, max= 512, avg=489.47, stdev=19.74, samples=19 00:30:43.614 lat (msec) : 10=0.04%, 20=1.32%, 50=97.21%, 100=1.43% 00:30:43.614 cpu : usr=99.22%, sys=0.46%, ctx=13, majf=0, minf=9 00:30:43.614 IO depths : 1=2.7%, 2=6.9%, 4=18.9%, 8=61.4%, 16=10.2%, 32=0.0%, >=64=0.0% 00:30:43.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.614 complete : 0=0.0%, 4=92.6%, 8=2.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.614 issued rwts: total=4906,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.614 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.614 filename2: (groupid=0, jobs=1): err= 0: pid=1046643: Mon Jun 10 10:56:06 2024 00:30:43.615 read: IOPS=475, BW=1901KiB/s (1947kB/s)(18.6MiB/10006msec) 00:30:43.615 slat (usec): min=5, max=109, avg=17.16, stdev=15.74 00:30:43.615 clat (usec): min=11350, max=56364, avg=33548.52, stdev=7057.02 00:30:43.615 lat (usec): min=11373, max=56370, avg=33565.68, 
stdev=7057.47 00:30:43.615 clat percentiles (usec): 00:30:43.615 | 1.00th=[18744], 5.00th=[20579], 10.00th=[25297], 20.00th=[30802], 00:30:43.615 | 30.00th=[31589], 40.00th=[32113], 50.00th=[32637], 60.00th=[33162], 00:30:43.615 | 70.00th=[34341], 80.00th=[38011], 90.00th=[43254], 95.00th=[49546], 00:30:43.615 | 99.00th=[53740], 99.50th=[54264], 99.90th=[56361], 99.95th=[56361], 00:30:43.615 | 99.99th=[56361] 00:30:43.615 bw ( KiB/s): min= 1440, max= 2208, per=3.95%, avg=1888.84, stdev=240.80, samples=19 00:30:43.615 iops : min= 360, max= 552, avg=472.21, stdev=60.20, samples=19 00:30:43.615 lat (msec) : 20=3.97%, 50=91.15%, 100=4.88% 00:30:43.615 cpu : usr=99.06%, sys=0.62%, ctx=13, majf=0, minf=9 00:30:43.615 IO depths : 1=0.4%, 2=2.6%, 4=14.7%, 8=69.1%, 16=13.2%, 32=0.0%, >=64=0.0% 00:30:43.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.615 complete : 0=0.0%, 4=92.1%, 8=3.5%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.615 issued rwts: total=4756,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.615 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:43.615 00:30:43.615 Run status group 0 (all jobs): 00:30:43.615 READ: bw=46.7MiB/s (49.0MB/s), 1901KiB/s-2083KiB/s (1947kB/s-2133kB/s), io=469MiB (492MB), run=10003-10054msec 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:43.615 
10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:43.615 bdev_null0 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:43.615 [2024-06-10 10:56:06.380537] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:43.615 bdev_null1 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:43.615 10:56:06 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:43.615 { 00:30:43.615 "params": { 00:30:43.615 "name": "Nvme$subsystem", 00:30:43.615 "trtype": "$TEST_TRANSPORT", 00:30:43.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:43.615 "adrfam": "ipv4", 00:30:43.615 "trsvcid": "$NVMF_PORT", 00:30:43.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:43.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:43.615 "hdgst": ${hdgst:-false}, 00:30:43.615 "ddgst": ${ddgst:-false} 00:30:43.615 }, 00:30:43.615 "method": "bdev_nvme_attach_controller" 00:30:43.615 } 00:30:43.615 EOF 00:30:43.615 )") 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:43.615 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:43.616 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:43.616 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:30:43.616 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:43.616 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:43.616 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:30:43.616 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:30:43.616 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:43.616 10:56:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:43.616 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:43.616 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:43.616 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:30:43.616 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:43.616 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:43.616 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:43.616 10:56:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:43.616 10:56:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:43.616 { 00:30:43.616 "params": { 00:30:43.616 "name": "Nvme$subsystem", 00:30:43.616 "trtype": "$TEST_TRANSPORT", 00:30:43.616 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:43.616 "adrfam": "ipv4", 00:30:43.616 "trsvcid": "$NVMF_PORT", 00:30:43.616 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:43.616 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:43.616 "hdgst": ${hdgst:-false}, 00:30:43.616 "ddgst": ${ddgst:-false} 00:30:43.616 }, 00:30:43.616 "method": "bdev_nvme_attach_controller" 00:30:43.616 } 00:30:43.616 EOF 
00:30:43.616 )") 00:30:43.616 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:43.616 10:56:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:43.616 10:56:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:43.616 10:56:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:43.616 10:56:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:43.616 10:56:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:43.616 "params": { 00:30:43.616 "name": "Nvme0", 00:30:43.616 "trtype": "tcp", 00:30:43.616 "traddr": "10.0.0.2", 00:30:43.616 "adrfam": "ipv4", 00:30:43.616 "trsvcid": "4420", 00:30:43.616 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:43.616 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:43.616 "hdgst": false, 00:30:43.616 "ddgst": false 00:30:43.616 }, 00:30:43.616 "method": "bdev_nvme_attach_controller" 00:30:43.616 },{ 00:30:43.616 "params": { 00:30:43.616 "name": "Nvme1", 00:30:43.616 "trtype": "tcp", 00:30:43.616 "traddr": "10.0.0.2", 00:30:43.616 "adrfam": "ipv4", 00:30:43.616 "trsvcid": "4420", 00:30:43.616 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:43.616 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:43.616 "hdgst": false, 00:30:43.616 "ddgst": false 00:30:43.616 }, 00:30:43.616 "method": "bdev_nvme_attach_controller" 00:30:43.616 }' 00:30:43.616 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:43.616 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:43.616 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:43.616 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:43.616 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:30:43.616 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:43.616 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:43.616 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:43.616 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:43.616 10:56:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:43.616 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:43.616 ... 00:30:43.616 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:43.616 ... 
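The trace just above shows how these jobs are actually launched: the SPDK fio plugin is preloaded into fio, and both the bdev JSON config and the generated job file are handed over via /dev/fd. Each filenameN section maps to one namespace attached through bdev_nvme_attach_controller. Below is a minimal standalone sketch of the same invocation, with the plugin and fio paths taken from the log; the hand-written job file and the bdev name Nvme0n1 are assumptions, since the real job file and JSON config come from gen_fio_conf and gen_nvmf_target_json.

# Job parameters mirror the fio_dif_rand_params values visible above
# (randread, bs=8k,16k,128k, iodepth=8, runtime=5).
PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev

cat > dif.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
runtime=5

[filename0]
filename=Nvme0n1
EOF

LD_PRELOAD=$PLUGIN /usr/src/fio/fio --ioengine=spdk_bdev \
  --spdk_json_conf ./bdev.json ./dif.fio
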
00:30:43.616 fio-3.35 00:30:43.616 Starting 4 threads 00:30:43.616 EAL: No free 2048 kB hugepages reported on node 1 00:30:48.920 00:30:48.920 filename0: (groupid=0, jobs=1): err= 0: pid=1048821: Mon Jun 10 10:56:12 2024 00:30:48.920 read: IOPS=2120, BW=16.6MiB/s (17.4MB/s)(82.9MiB/5002msec) 00:30:48.920 slat (nsec): min=5628, max=39610, avg=7654.73, stdev=2705.49 00:30:48.920 clat (usec): min=2005, max=6160, avg=3751.72, stdev=476.69 00:30:48.920 lat (usec): min=2030, max=6167, avg=3759.37, stdev=476.65 00:30:48.920 clat percentiles (usec): 00:30:48.920 | 1.00th=[ 2769], 5.00th=[ 3228], 10.00th=[ 3359], 20.00th=[ 3523], 00:30:48.920 | 30.00th=[ 3556], 40.00th=[ 3687], 50.00th=[ 3720], 60.00th=[ 3752], 00:30:48.920 | 70.00th=[ 3785], 80.00th=[ 3785], 90.00th=[ 4113], 95.00th=[ 4883], 00:30:48.920 | 99.00th=[ 5604], 99.50th=[ 5735], 99.90th=[ 5997], 99.95th=[ 6063], 00:30:48.920 | 99.99th=[ 6128] 00:30:48.920 bw ( KiB/s): min=16448, max=17376, per=25.12%, avg=16945.78, stdev=321.24, samples=9 00:30:48.920 iops : min= 2056, max= 2172, avg=2118.22, stdev=40.16, samples=9 00:30:48.920 lat (msec) : 4=88.75%, 10=11.25% 00:30:48.920 cpu : usr=96.26%, sys=3.50%, ctx=8, majf=0, minf=2 00:30:48.920 IO depths : 1=0.2%, 2=0.7%, 4=69.5%, 8=29.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:48.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.920 complete : 0=0.0%, 4=94.3%, 8=5.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.920 issued rwts: total=10609,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:48.920 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:48.920 filename0: (groupid=0, jobs=1): err= 0: pid=1048822: Mon Jun 10 10:56:12 2024 00:30:48.920 read: IOPS=2077, BW=16.2MiB/s (17.0MB/s)(81.2MiB/5002msec) 00:30:48.920 slat (nsec): min=5619, max=39738, avg=7829.98, stdev=2898.51 00:30:48.920 clat (usec): min=1657, max=8319, avg=3829.08, stdev=570.30 00:30:48.920 lat (usec): min=1663, max=8343, avg=3836.91, stdev=570.20 00:30:48.920 clat percentiles (usec): 00:30:48.920 | 1.00th=[ 2835], 5.00th=[ 3294], 10.00th=[ 3425], 20.00th=[ 3523], 00:30:48.920 | 30.00th=[ 3621], 40.00th=[ 3687], 50.00th=[ 3720], 60.00th=[ 3752], 00:30:48.920 | 70.00th=[ 3785], 80.00th=[ 3818], 90.00th=[ 4555], 95.00th=[ 5407], 00:30:48.920 | 99.00th=[ 5669], 99.50th=[ 5800], 99.90th=[ 6718], 99.95th=[ 7046], 00:30:48.920 | 99.99th=[ 8291] 00:30:48.920 bw ( KiB/s): min=16384, max=16848, per=24.66%, avg=16629.33, stdev=171.77, samples=9 00:30:48.920 iops : min= 2048, max= 2106, avg=2078.67, stdev=21.47, samples=9 00:30:48.921 lat (msec) : 2=0.03%, 4=85.17%, 10=14.80% 00:30:48.921 cpu : usr=96.60%, sys=3.14%, ctx=10, majf=0, minf=9 00:30:48.921 IO depths : 1=0.2%, 2=0.7%, 4=72.3%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:48.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.921 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.921 issued rwts: total=10391,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:48.921 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:48.921 filename1: (groupid=0, jobs=1): err= 0: pid=1048823: Mon Jun 10 10:56:12 2024 00:30:48.921 read: IOPS=2116, BW=16.5MiB/s (17.3MB/s)(82.7MiB/5002msec) 00:30:48.921 slat (nsec): min=5620, max=25580, avg=6437.00, stdev=1921.08 00:30:48.921 clat (usec): min=1406, max=7702, avg=3761.26, stdev=477.31 00:30:48.921 lat (usec): min=1412, max=7726, avg=3767.70, stdev=477.31 00:30:48.921 clat percentiles (usec): 00:30:48.921 | 1.00th=[ 2900], 5.00th=[ 3261], 10.00th=[ 
3392], 20.00th=[ 3523], 00:30:48.921 | 30.00th=[ 3589], 40.00th=[ 3687], 50.00th=[ 3720], 60.00th=[ 3752], 00:30:48.921 | 70.00th=[ 3785], 80.00th=[ 3785], 90.00th=[ 4113], 95.00th=[ 4883], 00:30:48.921 | 99.00th=[ 5669], 99.50th=[ 5800], 99.90th=[ 6390], 99.95th=[ 6587], 00:30:48.921 | 99.99th=[ 7635] 00:30:48.921 bw ( KiB/s): min=16512, max=17376, per=25.13%, avg=16951.22, stdev=299.34, samples=9 00:30:48.921 iops : min= 2064, max= 2172, avg=2118.89, stdev=37.43, samples=9 00:30:48.921 lat (msec) : 2=0.19%, 4=88.88%, 10=10.94% 00:30:48.921 cpu : usr=96.66%, sys=3.10%, ctx=13, majf=0, minf=9 00:30:48.921 IO depths : 1=0.1%, 2=0.5%, 4=71.8%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:48.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.921 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.921 issued rwts: total=10589,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:48.921 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:48.921 filename1: (groupid=0, jobs=1): err= 0: pid=1048824: Mon Jun 10 10:56:12 2024 00:30:48.921 read: IOPS=2117, BW=16.5MiB/s (17.3MB/s)(82.8MiB/5004msec) 00:30:48.921 slat (usec): min=5, max=565, avg= 8.08, stdev= 6.23 00:30:48.921 clat (usec): min=1898, max=6856, avg=3755.04, stdev=462.62 00:30:48.921 lat (usec): min=1907, max=6865, avg=3763.12, stdev=462.65 00:30:48.921 clat percentiles (usec): 00:30:48.921 | 1.00th=[ 2802], 5.00th=[ 3228], 10.00th=[ 3392], 20.00th=[ 3523], 00:30:48.921 | 30.00th=[ 3556], 40.00th=[ 3687], 50.00th=[ 3720], 60.00th=[ 3752], 00:30:48.921 | 70.00th=[ 3785], 80.00th=[ 3818], 90.00th=[ 4146], 95.00th=[ 4817], 00:30:48.921 | 99.00th=[ 5669], 99.50th=[ 5735], 99.90th=[ 6194], 99.95th=[ 6521], 00:30:48.921 | 99.99th=[ 6849] 00:30:48.921 bw ( KiB/s): min=16400, max=17408, per=25.13%, avg=16947.20, stdev=335.94, samples=10 00:30:48.921 iops : min= 2050, max= 2176, avg=2118.40, stdev=41.99, samples=10 00:30:48.921 lat (msec) : 2=0.01%, 4=88.02%, 10=11.97% 00:30:48.921 cpu : usr=96.58%, sys=3.16%, ctx=8, majf=0, minf=9 00:30:48.921 IO depths : 1=0.2%, 2=0.7%, 4=70.5%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:48.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.921 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.921 issued rwts: total=10598,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:48.921 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:48.921 00:30:48.921 Run status group 0 (all jobs): 00:30:48.921 READ: bw=65.9MiB/s (69.1MB/s), 16.2MiB/s-16.6MiB/s (17.0MB/s-17.4MB/s), io=330MiB (346MB), run=5002-5004msec 00:30:48.921 10:56:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:30:48.921 10:56:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:48.921 10:56:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:48.921 10:56:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:48.921 10:56:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:48.921 10:56:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:48.921 10:56:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:48.921 10:56:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:48.921 10:56:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
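destroy_subsystems, traced above and continuing below, reduces to two RPCs per index: delete the NVMe-oF subsystem, then delete the null bdev that backed its namespace. The rpc_cmd helper in the trace is the autotest wrapper around scripts/rpc.py (that wrapping is an assumption; the RPC names and arguments are copied verbatim from the log), so the manual equivalent for index 0 is:

# Teardown for subsystem 0, matching the rpc_cmd calls in the surrounding trace.
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
./scripts/rpc.py bdev_null_delete bdev_null0
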
00:30:48.921 10:56:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:48.921 10:56:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:48.921 10:56:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:48.921 10:56:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:48.921 10:56:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:48.921 10:56:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:48.921 10:56:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:48.921 10:56:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:48.921 10:56:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:48.921 10:56:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:48.921 10:56:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:48.921 10:56:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:48.921 10:56:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:48.921 10:56:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:48.921 10:56:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:48.921 00:30:48.921 real 0m24.378s 00:30:48.921 user 5m14.975s 00:30:48.921 sys 0m3.706s 00:30:48.921 10:56:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:48.921 10:56:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:48.921 ************************************ 00:30:48.921 END TEST fio_dif_rand_params 00:30:48.921 ************************************ 00:30:48.921 10:56:12 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:30:48.921 10:56:12 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:30:48.921 10:56:12 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:48.921 10:56:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:48.921 ************************************ 00:30:48.921 START TEST fio_dif_digest 00:30:48.921 ************************************ 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # fio_dif_digest 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:30:48.921 10:56:12 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:48.921 bdev_null0 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:48.921 [2024-06-10 10:56:12.857276] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:48.921 10:56:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:48.921 { 00:30:48.921 "params": { 00:30:48.921 "name": "Nvme$subsystem", 00:30:48.921 "trtype": "$TEST_TRANSPORT", 00:30:48.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:48.921 "adrfam": "ipv4", 00:30:48.921 "trsvcid": "$NVMF_PORT", 00:30:48.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:30:48.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:48.921 "hdgst": ${hdgst:-false}, 00:30:48.921 "ddgst": ${ddgst:-false} 00:30:48.921 }, 00:30:48.921 "method": "bdev_nvme_attach_controller" 00:30:48.921 } 00:30:48.921 EOF 00:30:48.921 )") 00:30:48.922 10:56:12 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:30:48.922 10:56:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:30:48.922 10:56:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:48.922 10:56:12 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:30:48.922 10:56:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # local sanitizers 00:30:48.922 10:56:12 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:30:48.922 10:56:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:48.922 10:56:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # shift 00:30:48.922 10:56:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local asan_lib= 00:30:48.922 10:56:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:48.922 10:56:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:30:48.922 10:56:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:48.922 10:56:12 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:30:48.922 10:56:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # grep libasan 00:30:48.922 10:56:12 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:30:48.922 10:56:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:48.922 10:56:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
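For fio_dif_digest the target-side setup traced a little above is the same plumbing as the earlier runs, with one difference: the null bdev is created with --dif-type 3 (still 16 bytes of metadata per block). With arguments copied verbatim from the trace, and again issued through scripts/rpc.py rather than the rpc_cmd wrapper, the equivalent sequence is:

# Target-side setup for the digest test.
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
  --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
  -t tcp -a 10.0.0.2 -s 4420
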
00:30:48.922 10:56:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:30:48.922 10:56:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:48.922 "params": { 00:30:48.922 "name": "Nvme0", 00:30:48.922 "trtype": "tcp", 00:30:48.922 "traddr": "10.0.0.2", 00:30:48.922 "adrfam": "ipv4", 00:30:48.922 "trsvcid": "4420", 00:30:48.922 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:48.922 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:48.922 "hdgst": true, 00:30:48.922 "ddgst": true 00:30:48.922 }, 00:30:48.922 "method": "bdev_nvme_attach_controller" 00:30:48.922 }' 00:30:48.922 10:56:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:48.922 10:56:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:48.922 10:56:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:30:48.922 10:56:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:48.922 10:56:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:30:48.922 10:56:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:30:48.922 10:56:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # asan_lib= 00:30:48.922 10:56:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:30:48.922 10:56:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:48.922 10:56:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:49.181 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:49.181 ... 
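The JSON printed a little above is what the fio plugin consumes to attach the initiator-side controller; the only change from the earlier runs is "hdgst": true and "ddgst": true, which enables NVMe/TCP header and data digests on this connection. Written out as a standalone config file, the params are verbatim from the log, while the outer subsystems/bdev wrapper is an assumption that gen_nvmf_target_json follows SPDK's standard JSON-config layout (its full output is not echoed in the trace):

cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true,
            "ddgst": true
          }
        }
      ]
    }
  ]
}
EOF
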
00:30:49.181 fio-3.35 00:30:49.181 Starting 3 threads 00:30:49.181 EAL: No free 2048 kB hugepages reported on node 1 00:31:01.404 00:31:01.404 filename0: (groupid=0, jobs=1): err= 0: pid=1050346: Mon Jun 10 10:56:23 2024 00:31:01.404 read: IOPS=151, BW=19.0MiB/s (19.9MB/s)(191MiB/10031msec) 00:31:01.404 slat (nsec): min=6072, max=34967, avg=8136.10, stdev=1353.51 00:31:01.404 clat (usec): min=7570, max=99378, avg=19732.84, stdev=14198.24 00:31:01.404 lat (usec): min=7580, max=99385, avg=19740.98, stdev=14198.24 00:31:01.404 clat percentiles (usec): 00:31:01.404 | 1.00th=[ 9110], 5.00th=[10683], 10.00th=[11600], 20.00th=[13698], 00:31:01.404 | 30.00th=[14484], 40.00th=[14877], 50.00th=[15401], 60.00th=[15795], 00:31:01.404 | 70.00th=[16319], 80.00th=[17171], 90.00th=[54264], 95.00th=[56361], 00:31:01.404 | 99.00th=[58983], 99.50th=[95945], 99.90th=[99091], 99.95th=[99091], 00:31:01.404 | 99.99th=[99091] 00:31:01.404 bw ( KiB/s): min=13056, max=24832, per=25.88%, avg=19468.80, stdev=2709.85, samples=20 00:31:01.404 iops : min= 102, max= 194, avg=152.10, stdev=21.17, samples=20 00:31:01.404 lat (msec) : 10=2.36%, 20=86.09%, 50=0.13%, 100=11.42% 00:31:01.404 cpu : usr=96.18%, sys=3.56%, ctx=18, majf=0, minf=50 00:31:01.404 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:01.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.404 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.404 issued rwts: total=1524,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.404 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:01.404 filename0: (groupid=0, jobs=1): err= 0: pid=1050347: Mon Jun 10 10:56:23 2024 00:31:01.404 read: IOPS=229, BW=28.7MiB/s (30.0MB/s)(287MiB/10006msec) 00:31:01.404 slat (nsec): min=8549, max=36036, avg=9671.68, stdev=1109.25 00:31:01.404 clat (usec): min=6879, max=57321, avg=13073.16, stdev=3859.21 00:31:01.404 lat (usec): min=6888, max=57330, avg=13082.83, stdev=3859.26 00:31:01.404 clat percentiles (usec): 00:31:01.404 | 1.00th=[ 8029], 5.00th=[ 9634], 10.00th=[10421], 20.00th=[11076], 00:31:01.404 | 30.00th=[11600], 40.00th=[12387], 50.00th=[13042], 60.00th=[13566], 00:31:01.405 | 70.00th=[14091], 80.00th=[14615], 90.00th=[15270], 95.00th=[15795], 00:31:01.405 | 99.00th=[17171], 99.50th=[52167], 99.90th=[56361], 99.95th=[56361], 00:31:01.405 | 99.99th=[57410] 00:31:01.405 bw ( KiB/s): min=25344, max=32256, per=39.06%, avg=29386.11, stdev=1899.66, samples=19 00:31:01.405 iops : min= 198, max= 252, avg=229.58, stdev=14.84, samples=19 00:31:01.405 lat (msec) : 10=7.11%, 20=92.24%, 100=0.65% 00:31:01.405 cpu : usr=96.47%, sys=3.27%, ctx=21, majf=0, minf=142 00:31:01.405 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:01.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.405 issued rwts: total=2294,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.405 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:01.405 filename0: (groupid=0, jobs=1): err= 0: pid=1050348: Mon Jun 10 10:56:23 2024 00:31:01.405 read: IOPS=207, BW=26.0MiB/s (27.2MB/s)(261MiB/10047msec) 00:31:01.405 slat (nsec): min=6082, max=60365, avg=8142.30, stdev=2316.40 00:31:01.405 clat (usec): min=7138, max=97418, avg=14410.20, stdev=8415.06 00:31:01.405 lat (usec): min=7145, max=97428, avg=14418.34, stdev=8415.19 00:31:01.405 clat percentiles (usec): 00:31:01.405 | 
1.00th=[ 8455], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10814], 00:31:01.405 | 30.00th=[11731], 40.00th=[12780], 50.00th=[13304], 60.00th=[13829], 00:31:01.405 | 70.00th=[14353], 80.00th=[14877], 90.00th=[15795], 95.00th=[16712], 00:31:01.405 | 99.00th=[55837], 99.50th=[56886], 99.90th=[94897], 99.95th=[95945], 00:31:01.405 | 99.99th=[96994] 00:31:01.405 bw ( KiB/s): min=20992, max=31488, per=35.48%, avg=26688.00, stdev=2779.48, samples=20 00:31:01.405 iops : min= 164, max= 246, avg=208.50, stdev=21.71, samples=20 00:31:01.405 lat (msec) : 10=9.58%, 20=86.97%, 50=0.14%, 100=3.31% 00:31:01.405 cpu : usr=96.41%, sys=3.34%, ctx=21, majf=0, minf=166 00:31:01.405 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:01.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.405 issued rwts: total=2087,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.405 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:01.405 00:31:01.405 Run status group 0 (all jobs): 00:31:01.405 READ: bw=73.5MiB/s (77.0MB/s), 19.0MiB/s-28.7MiB/s (19.9MB/s-30.0MB/s), io=738MiB (774MB), run=10006-10047msec 00:31:01.405 10:56:23 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:01.405 10:56:23 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:31:01.405 10:56:23 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:31:01.405 10:56:23 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:01.405 10:56:23 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:31:01.405 10:56:23 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:01.405 10:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:01.405 10:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:01.405 10:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:01.405 10:56:23 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:01.405 10:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:01.405 10:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:01.405 10:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:01.405 00:31:01.405 real 0m11.139s 00:31:01.405 user 0m41.359s 00:31:01.405 sys 0m1.346s 00:31:01.405 10:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:01.405 10:56:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:01.405 ************************************ 00:31:01.405 END TEST fio_dif_digest 00:31:01.405 ************************************ 00:31:01.405 10:56:23 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:01.405 10:56:23 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:31:01.405 10:56:23 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:01.405 10:56:23 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:31:01.405 10:56:23 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:01.405 10:56:23 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:31:01.405 10:56:23 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:01.405 10:56:23 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:01.405 rmmod nvme_tcp 00:31:01.405 rmmod nvme_fabrics 
00:31:01.405 rmmod nvme_keyring 00:31:01.405 10:56:24 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:01.405 10:56:24 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:31:01.405 10:56:24 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:31:01.405 10:56:24 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1039873 ']' 00:31:01.405 10:56:24 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1039873 00:31:01.405 10:56:24 nvmf_dif -- common/autotest_common.sh@949 -- # '[' -z 1039873 ']' 00:31:01.405 10:56:24 nvmf_dif -- common/autotest_common.sh@953 -- # kill -0 1039873 00:31:01.405 10:56:24 nvmf_dif -- common/autotest_common.sh@954 -- # uname 00:31:01.405 10:56:24 nvmf_dif -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:01.405 10:56:24 nvmf_dif -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1039873 00:31:01.405 10:56:24 nvmf_dif -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:31:01.405 10:56:24 nvmf_dif -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:31:01.405 10:56:24 nvmf_dif -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1039873' 00:31:01.405 killing process with pid 1039873 00:31:01.405 10:56:24 nvmf_dif -- common/autotest_common.sh@968 -- # kill 1039873 00:31:01.405 [2024-06-10 10:56:24.140437] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:01.405 10:56:24 nvmf_dif -- common/autotest_common.sh@973 -- # wait 1039873 00:31:01.405 10:56:24 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:01.405 10:56:24 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:03.320 Waiting for block devices as requested 00:31:03.582 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:03.582 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:03.582 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:03.842 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:03.842 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:03.842 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:04.104 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:04.104 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:04.104 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:04.364 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:04.364 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:04.364 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:04.364 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:04.625 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:04.625 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:04.625 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:04.886 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:04.886 10:56:28 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:04.886 10:56:28 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:04.886 10:56:28 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:04.886 10:56:28 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:04.886 10:56:28 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:04.886 10:56:28 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:04.886 10:56:28 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:06.799 10:56:31 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:06.799 
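nvmftestfini, traced above, is the generic cleanup: sync the filesystems, unload the kernel NVMe/TCP initiator modules, kill the nvmf_tgt process started at the beginning of the suite (pid 1039873 in this run), rebind PCI devices via scripts/setup.sh reset, then drop the target-side network namespace and flush the initiator interface. A rough by-hand sketch follows; the namespace delete is an assumption, since _remove_spdk_ns runs with xtrace disabled and the name cvl_0_0_ns_spdk is taken from the setup later in this log.

# Approximate manual equivalent of the cleanup above (pid is from this run).
sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
if kill -0 1039873 2>/dev/null; then
  kill 1039873                 # killprocess also verifies the process name first
fi
# scripts/setup.sh reset rebinds the PCI devices back to their kernel drivers here
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed contents of _remove_spdk_ns
ip -4 addr flush cvl_0_1
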
00:31:06.799 real 1m17.316s 00:31:06.799 user 7m56.266s 00:31:06.799 sys 0m19.083s 00:31:06.799 10:56:31 nvmf_dif -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:06.799 10:56:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:06.799 ************************************ 00:31:06.799 END TEST nvmf_dif 00:31:06.799 ************************************ 00:31:06.799 10:56:31 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:06.799 10:56:31 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:31:06.799 10:56:31 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:06.799 10:56:31 -- common/autotest_common.sh@10 -- # set +x 00:31:07.061 ************************************ 00:31:07.061 START TEST nvmf_abort_qd_sizes 00:31:07.061 ************************************ 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:07.061 * Looking for test storage... 00:31:07.061 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:07.061 10:56:31 
nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:31:07.061 10:56:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:15.209 10:56:38 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:15.209 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:15.209 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:15.209 Found net devices under 0000:31:00.0: cvl_0_0 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:15.209 Found net devices under 0000:31:00.1: cvl_0_1 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:15.209 10:56:38 nvmf_abort_qd_sizes 
-- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:15.209 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:15.210 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:15.210 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:15.210 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:15.210 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:15.210 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:15.210 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:15.210 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:15.210 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:15.210 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:15.210 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:15.210 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:15.210 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:15.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:15.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:31:15.210 00:31:15.210 --- 10.0.0.2 ping statistics --- 00:31:15.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:15.210 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:31:15.210 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:15.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:15.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms 00:31:15.210 00:31:15.210 --- 10.0.0.1 ping statistics --- 00:31:15.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:15.210 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:31:15.210 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:15.210 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:31:15.210 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:15.210 10:56:38 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:18.514 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:18.514 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:18.514 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:18.514 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:18.514 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:18.514 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:18.514 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:18.514 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:18.514 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:18.514 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:18.514 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:18.514 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:18.514 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:18.514 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:18.514 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:18.514 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:18.514 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:31:18.514 10:56:42 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:18.514 10:56:42 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:18.514 10:56:42 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:18.514 10:56:42 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:18.514 10:56:42 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:18.514 10:56:42 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:18.514 10:56:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:31:18.514 10:56:42 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:18.514 10:56:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:18.514 10:56:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:18.514 10:56:42 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1059775 00:31:18.514 10:56:42 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1059775 00:31:18.514 10:56:42 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:18.514 10:56:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@830 -- # '[' -z 1059775 ']' 00:31:18.514 10:56:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:18.514 10:56:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:18.514 10:56:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
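(The nvmf_tcp_init trace above reduces to the command sequence below; this is a condensed sketch of what the log shows, with the cvl_0_0/cvl_0_1 interface names and the 10.0.0.x addresses taken from this particular run.)

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                                        # target-side network namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move one port of the ice-bound NIC into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP (default namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                  # initiator -> target check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator check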
00:31:18.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:18.514 10:56:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:18.514 10:56:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:18.514 [2024-06-10 10:56:42.451579] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:31:18.514 [2024-06-10 10:56:42.451636] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:18.514 EAL: No free 2048 kB hugepages reported on node 1 00:31:18.514 [2024-06-10 10:56:42.526807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:18.514 [2024-06-10 10:56:42.604540] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:18.514 [2024-06-10 10:56:42.604582] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:18.514 [2024-06-10 10:56:42.604590] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:18.514 [2024-06-10 10:56:42.604597] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:18.514 [2024-06-10 10:56:42.604603] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:18.514 [2024-06-10 10:56:42.604645] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:31:18.514 [2024-06-10 10:56:42.604674] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 2 00:31:18.514 [2024-06-10 10:56:42.604832] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:31:18.514 [2024-06-10 10:56:42.604833] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 3 00:31:19.084 10:56:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:19.084 10:56:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@863 -- # return 0 00:31:19.084 10:56:43 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:19.084 10:56:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:19.084 10:56:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:19.084 10:56:43 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:19.084 10:56:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:19.084 10:56:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:31:19.084 10:56:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:31:19.084 10:56:43 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:31:19.084 10:56:43 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:31:19.084 10:56:43 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:31:19.084 10:56:43 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:31:19.084 10:56:43 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:31:19.084 10:56:43 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:31:19.084 10:56:43 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:31:19.084 10:56:43 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:31:19.084 10:56:43 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:31:19.084 10:56:43 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:31:19.084 10:56:43 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:31:19.084 10:56:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:31:19.084 10:56:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:31:19.084 10:56:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:31:19.084 10:56:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:31:19.084 10:56:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:19.084 10:56:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:19.084 ************************************ 00:31:19.084 START TEST spdk_target_abort 00:31:19.084 ************************************ 00:31:19.084 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # spdk_target 00:31:19.084 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:19.084 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:31:19.084 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:19.084 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:19.389 spdk_targetn1 00:31:19.389 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:19.389 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:19.390 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:19.390 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:19.390 [2024-06-10 10:56:43.620301] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:19.390 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:19.390 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:19.390 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:19.390 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:19.390 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:19.390 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:19.390 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:19.390 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:19.390 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:19.390 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:31:19.390 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:19.390 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:19.390 [2024-06-10 10:56:43.660362] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:19.390 [2024-06-10 10:56:43.660611] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:19.390 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:19.390 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:31:19.390 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:19.390 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:19.390 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:19.390 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:19.390 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:19.390 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:19.390 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:19.390 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:19.390 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:19.390 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:19.390 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:19.390 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:19.390 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:19.390 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:19.390 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:19.390 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:19.390 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:19.390 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:19.390 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:19.390 10:56:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:19.650 EAL: No free 2048 kB hugepages reported on node 1 00:31:19.650 [2024-06-10 10:56:43.834812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:512 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:31:19.650 [2024-06-10 10:56:43.834838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0043 p:1 m:0 dnr:0 00:31:19.650 [2024-06-10 10:56:43.914655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3056 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:31:19.650 [2024-06-10 10:56:43.914674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:22.949 Initializing NVMe Controllers 00:31:22.949 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:22.949 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:22.949 Initialization complete. Launching workers. 00:31:22.949 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11079, failed: 2 00:31:22.949 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2868, failed to submit 8213 00:31:22.949 success 740, unsuccess 2128, failed 0 00:31:22.949 10:56:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:22.949 10:56:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:22.949 EAL: No free 2048 kB hugepages reported on node 1 00:31:22.949 [2024-06-10 10:56:47.039400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:186 nsid:1 lba:832 len:8 PRP1 0x200007c60000 PRP2 0x0 00:31:22.949 [2024-06-10 10:56:47.039443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:186 cdw0:0 sqhd:0074 p:1 m:0 dnr:0 00:31:24.366 [2024-06-10 10:56:48.533342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:176 nsid:1 lba:34696 len:8 PRP1 0x200007c44000 PRP2 0x0 00:31:24.366 [2024-06-10 10:56:48.533380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:176 cdw0:0 sqhd:00f2 p:1 m:0 dnr:0 00:31:26.279 Initializing NVMe Controllers 00:31:26.279 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:26.279 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:26.279 Initialization complete. Launching workers. 
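(For readability, the spdk_target_abort setup traced above is equivalent to the RPC sequence below; rpc_cmd is the test wrapper around scripts/rpc.py talking to the nvmf_tgt started earlier inside the target namespace, and the 0000:65:00.0 BDF is specific to this machine.)

    scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target   # exposes bdev spdk_targetn1
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
    # the abort example is then run at queue depths 4, 24 and 64 with a 50/50 4k read/write mix
    for qd in 4 24 64; do
        build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done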
00:31:26.279 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8551, failed: 2 00:31:26.279 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1201, failed to submit 7352 00:31:26.279 success 340, unsuccess 861, failed 0 00:31:26.279 10:56:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:26.279 10:56:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:26.279 EAL: No free 2048 kB hugepages reported on node 1 00:31:27.663 [2024-06-10 10:56:51.818920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:155 nsid:1 lba:165512 len:8 PRP1 0x2000078fe000 PRP2 0x0 00:31:27.663 [2024-06-10 10:56:51.818951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:155 cdw0:0 sqhd:001c p:1 m:0 dnr:0 00:31:29.577 Initializing NVMe Controllers 00:31:29.577 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:29.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:29.577 Initialization complete. Launching workers. 00:31:29.577 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 41842, failed: 1 00:31:29.577 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2559, failed to submit 39284 00:31:29.577 success 597, unsuccess 1962, failed 0 00:31:29.577 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:31:29.577 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:29.577 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:29.577 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:29.577 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:31:29.577 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:29.577 10:56:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:30.960 10:56:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:30.960 10:56:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1059775 00:31:30.960 10:56:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@949 -- # '[' -z 1059775 ']' 00:31:30.960 10:56:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # kill -0 1059775 00:31:30.960 10:56:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # uname 00:31:30.960 10:56:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:30.960 10:56:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1059775 00:31:31.221 10:56:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:31:31.221 10:56:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' 
reactor_0 = sudo ']' 00:31:31.221 10:56:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1059775' 00:31:31.221 killing process with pid 1059775 00:31:31.221 10:56:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # kill 1059775 00:31:31.221 [2024-06-10 10:56:55.277423] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:31.221 10:56:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # wait 1059775 00:31:31.221 00:31:31.221 real 0m12.101s 00:31:31.221 user 0m49.046s 00:31:31.221 sys 0m1.927s 00:31:31.221 10:56:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:31.221 10:56:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:31.221 ************************************ 00:31:31.221 END TEST spdk_target_abort 00:31:31.221 ************************************ 00:31:31.221 10:56:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:31:31.221 10:56:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:31:31.221 10:56:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:31.221 10:56:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:31.221 ************************************ 00:31:31.221 START TEST kernel_target_abort 00:31:31.221 ************************************ 00:31:31.221 10:56:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # kernel_target 00:31:31.221 10:56:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:31:31.221 10:56:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:31:31.221 10:56:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:31.221 10:56:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:31.221 10:56:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:31.221 10:56:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:31.221 10:56:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:31.221 10:56:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:31.221 10:56:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:31.221 10:56:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:31.221 10:56:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:31.221 10:56:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:31.221 10:56:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:31.221 10:56:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:31.221 10:56:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:31.221 10:56:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:31.221 10:56:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:31.221 10:56:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:31:31.221 10:56:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:31.221 10:56:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:31.481 10:56:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:31.481 10:56:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:34.779 Waiting for block devices as requested 00:31:34.779 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:34.779 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:34.779 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:34.779 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:34.779 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:34.779 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:35.040 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:35.040 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:35.040 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:35.299 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:35.299 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:35.299 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:35.560 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:35.560 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:35.560 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:35.560 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:35.820 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:35.820 10:56:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:35.820 10:56:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:35.820 10:56:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:35.820 10:56:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:31:35.820 10:56:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:35.820 10:56:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:31:35.820 10:56:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:35.820 10:56:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:35.820 10:56:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:35.820 No valid GPT data, bailing 00:31:35.820 10:56:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:35.820 10:56:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:31:35.820 10:56:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 
1 00:31:35.820 10:56:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:35.820 10:56:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:35.820 10:56:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:35.820 10:56:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:35.820 10:56:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:35.820 10:56:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:35.820 10:56:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:31:35.820 10:56:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:35.820 10:56:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:31:35.821 10:56:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:35.821 10:56:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:31:35.821 10:56:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:31:35.821 10:56:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:31:35.821 10:56:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:35.821 10:57:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:31:35.821 00:31:35.821 Discovery Log Number of Records 2, Generation counter 2 00:31:35.821 =====Discovery Log Entry 0====== 00:31:35.821 trtype: tcp 00:31:35.821 adrfam: ipv4 00:31:35.821 subtype: current discovery subsystem 00:31:35.821 treq: not specified, sq flow control disable supported 00:31:35.821 portid: 1 00:31:35.821 trsvcid: 4420 00:31:35.821 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:35.821 traddr: 10.0.0.1 00:31:35.821 eflags: none 00:31:35.821 sectype: none 00:31:35.821 =====Discovery Log Entry 1====== 00:31:35.821 trtype: tcp 00:31:35.821 adrfam: ipv4 00:31:35.821 subtype: nvme subsystem 00:31:35.821 treq: not specified, sq flow control disable supported 00:31:35.821 portid: 1 00:31:35.821 trsvcid: 4420 00:31:35.821 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:35.821 traddr: 10.0.0.1 00:31:35.821 eflags: none 00:31:35.821 sectype: none 00:31:35.821 10:57:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:31:35.821 10:57:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:35.821 10:57:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:35.821 10:57:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:31:35.821 10:57:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:35.821 10:57:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- 
# local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:35.821 10:57:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:35.821 10:57:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:35.821 10:57:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:35.821 10:57:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:35.821 10:57:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:35.821 10:57:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:35.821 10:57:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:35.821 10:57:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:35.821 10:57:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:31:35.821 10:57:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:35.821 10:57:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:31:35.821 10:57:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:35.821 10:57:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:35.821 10:57:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:35.821 10:57:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:35.821 EAL: No free 2048 kB hugepages reported on node 1 00:31:39.118 Initializing NVMe Controllers 00:31:39.118 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:39.118 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:39.118 Initialization complete. Launching workers. 
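(The configure_kernel_target steps traced above drive the kernel nvmet configfs tree directly. The xtrace output only shows the echoed values, not the files they are redirected into, so the attribute names below are filled in from the standard nvmet configfs layout rather than from this log; treat it as a rough sketch.)

    modprobe nvmet                                     # nvmet_tcp is pulled in when the tcp port is enabled
    sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$sub"
    mkdir "$sub/namespaces/1"
    mkdir "$port"
    # the trace also echoes an "SPDK-<nqn>" string and a couple of 1s; the usual targets are:
    echo 1 > "$sub/attr_allow_any_host"
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
    echo 1 > "$sub/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"                   # activates the export; clean_kernel_target undoes this in reverse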
00:31:39.118 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 54744, failed: 0 00:31:39.118 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 54744, failed to submit 0 00:31:39.118 success 0, unsuccess 54744, failed 0 00:31:39.118 10:57:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:39.118 10:57:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:39.118 EAL: No free 2048 kB hugepages reported on node 1 00:31:42.418 Initializing NVMe Controllers 00:31:42.418 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:42.418 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:42.418 Initialization complete. Launching workers. 00:31:42.418 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 96374, failed: 0 00:31:42.418 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24298, failed to submit 72076 00:31:42.418 success 0, unsuccess 24298, failed 0 00:31:42.418 10:57:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:42.418 10:57:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:42.418 EAL: No free 2048 kB hugepages reported on node 1 00:31:44.963 Initializing NVMe Controllers 00:31:44.963 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:44.963 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:44.963 Initialization complete. Launching workers. 
00:31:44.963 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 92312, failed: 0 00:31:44.963 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23062, failed to submit 69250 00:31:44.963 success 0, unsuccess 23062, failed 0 00:31:44.963 10:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:31:44.963 10:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:44.963 10:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:31:45.224 10:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:45.224 10:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:45.224 10:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:45.224 10:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:45.224 10:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:45.224 10:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:45.224 10:57:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:48.528 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:48.528 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:48.528 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:48.528 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:48.528 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:48.528 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:48.528 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:48.528 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:48.528 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:48.528 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:48.528 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:48.528 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:48.528 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:48.528 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:48.528 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:48.528 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:50.444 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:31:50.444 00:31:50.444 real 0m19.129s 00:31:50.444 user 0m8.505s 00:31:50.444 sys 0m5.666s 00:31:50.444 10:57:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:50.444 10:57:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:50.444 ************************************ 00:31:50.444 END TEST kernel_target_abort 00:31:50.444 ************************************ 00:31:50.444 10:57:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:50.444 10:57:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:31:50.444 10:57:14 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:50.444 10:57:14 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:31:50.444 10:57:14 nvmf_abort_qd_sizes -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:50.444 10:57:14 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:31:50.444 10:57:14 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:50.444 10:57:14 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:50.444 rmmod nvme_tcp 00:31:50.444 rmmod nvme_fabrics 00:31:50.444 rmmod nvme_keyring 00:31:50.444 10:57:14 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:50.444 10:57:14 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:31:50.444 10:57:14 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:31:50.444 10:57:14 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1059775 ']' 00:31:50.444 10:57:14 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1059775 00:31:50.705 10:57:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@949 -- # '[' -z 1059775 ']' 00:31:50.705 10:57:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@953 -- # kill -0 1059775 00:31:50.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (1059775) - No such process 00:31:50.705 10:57:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@976 -- # echo 'Process with pid 1059775 is not found' 00:31:50.705 Process with pid 1059775 is not found 00:31:50.705 10:57:14 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:50.705 10:57:14 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:54.009 Waiting for block devices as requested 00:31:54.009 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:54.009 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:54.270 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:54.271 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:54.271 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:54.271 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:54.532 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:54.532 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:54.532 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:54.794 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:54.794 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:55.055 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:55.055 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:55.055 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:55.055 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:55.316 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:55.316 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:55.316 10:57:19 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:55.316 10:57:19 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:55.316 10:57:19 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:55.316 10:57:19 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:55.316 10:57:19 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:55.316 10:57:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:55.316 10:57:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:57.862 10:57:21 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:57.862 00:31:57.862 real 0m50.425s 00:31:57.862 user 1m2.771s 00:31:57.862 sys 0m18.184s 00:31:57.862 10:57:21 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:31:57.862 10:57:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:57.862 ************************************ 00:31:57.862 END TEST nvmf_abort_qd_sizes 00:31:57.862 ************************************ 00:31:57.862 10:57:21 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:31:57.862 10:57:21 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:31:57.862 10:57:21 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:57.862 10:57:21 -- common/autotest_common.sh@10 -- # set +x 00:31:57.862 ************************************ 00:31:57.862 START TEST keyring_file 00:31:57.862 ************************************ 00:31:57.862 10:57:21 keyring_file -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:31:57.862 * Looking for test storage... 00:31:57.862 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:31:57.862 10:57:21 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:31:57.862 10:57:21 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:57.862 10:57:21 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:31:57.862 10:57:21 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:57.862 10:57:21 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:57.862 10:57:21 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:57.862 10:57:21 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:57.863 10:57:21 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:57.863 10:57:21 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:57.863 10:57:21 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:57.863 10:57:21 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:57.863 10:57:21 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:57.863 10:57:21 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:57.863 10:57:21 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:57.863 10:57:21 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:57.863 10:57:21 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:57.863 10:57:21 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:57.863 10:57:21 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:57.863 10:57:21 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:57.863 10:57:21 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:57.863 10:57:21 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:57.863 10:57:21 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:57.863 10:57:21 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:57.863 10:57:21 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.863 10:57:21 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.863 10:57:21 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.863 10:57:21 keyring_file -- paths/export.sh@5 -- # export PATH 00:31:57.863 10:57:21 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.863 10:57:21 keyring_file -- nvmf/common.sh@47 -- # : 0 00:31:57.863 10:57:21 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:57.863 10:57:21 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:57.863 10:57:21 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:57.863 10:57:21 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:57.863 10:57:21 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:57.863 10:57:21 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:57.863 10:57:21 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:57.863 10:57:21 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:57.863 10:57:21 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:57.863 10:57:21 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:57.863 10:57:21 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:57.863 10:57:21 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:31:57.863 10:57:21 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:31:57.863 10:57:21 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:31:57.863 10:57:21 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:57.863 10:57:21 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:57.863 10:57:21 keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:57.863 10:57:21 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:57.863 10:57:21 keyring_file -- 
keyring/common.sh@17 -- # digest=0 00:31:57.863 10:57:21 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:57.863 10:57:21 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.YYNIMCxA1i 00:31:57.863 10:57:21 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:57.863 10:57:21 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:57.863 10:57:21 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:57.863 10:57:21 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:57.863 10:57:21 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:57.863 10:57:21 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:57.863 10:57:21 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:57.863 10:57:21 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.YYNIMCxA1i 00:31:57.863 10:57:21 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.YYNIMCxA1i 00:31:57.863 10:57:21 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.YYNIMCxA1i 00:31:57.863 10:57:21 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:31:57.863 10:57:21 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:57.863 10:57:21 keyring_file -- keyring/common.sh@17 -- # name=key1 00:31:57.863 10:57:21 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:57.863 10:57:21 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:57.863 10:57:21 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:57.863 10:57:21 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.e5BdJaNplD 00:31:57.863 10:57:21 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:31:57.863 10:57:21 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:31:57.863 10:57:21 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:57.863 10:57:21 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:57.863 10:57:21 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:31:57.863 10:57:21 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:57.863 10:57:21 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:57.863 10:57:21 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.e5BdJaNplD 00:31:57.863 10:57:21 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.e5BdJaNplD 00:31:57.863 10:57:21 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.e5BdJaNplD 00:31:57.863 10:57:21 keyring_file -- keyring/file.sh@30 -- # tgtpid=1070383 00:31:57.863 10:57:21 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1070383 00:31:57.863 10:57:21 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 1070383 ']' 00:31:57.863 10:57:21 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:57.863 10:57:21 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:57.863 10:57:21 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:57.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:57.863 10:57:21 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:57.863 10:57:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:57.863 10:57:21 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:31:57.863 [2024-06-10 10:57:21.909773] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:31:57.863 [2024-06-10 10:57:21.909848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1070383 ] 00:31:57.863 EAL: No free 2048 kB hugepages reported on node 1 00:31:57.863 [2024-06-10 10:57:21.977049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:57.863 [2024-06-10 10:57:22.052866] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:31:58.434 10:57:22 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:58.434 10:57:22 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:31:58.434 10:57:22 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:31:58.434 10:57:22 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:58.434 10:57:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:58.434 [2024-06-10 10:57:22.691279] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:58.434 null0 00:31:58.694 [2024-06-10 10:57:22.723293] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:58.694 [2024-06-10 10:57:22.723338] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:58.694 [2024-06-10 10:57:22.723574] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:58.694 [2024-06-10 10:57:22.731329] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:58.694 10:57:22 keyring_file -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:58.694 10:57:22 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:58.694 10:57:22 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:31:58.694 10:57:22 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:58.694 10:57:22 keyring_file -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:31:58.694 10:57:22 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:58.694 10:57:22 keyring_file -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:31:58.694 10:57:22 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:58.694 10:57:22 keyring_file -- common/autotest_common.sh@652 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:58.694 10:57:22 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:58.694 10:57:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:58.694 [2024-06-10 10:57:22.747369] nvmf_rpc.c: 773:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:31:58.694 request: 00:31:58.694 { 00:31:58.694 "nqn": 
"nqn.2016-06.io.spdk:cnode0", 00:31:58.694 "secure_channel": false, 00:31:58.694 "listen_address": { 00:31:58.694 "trtype": "tcp", 00:31:58.694 "traddr": "127.0.0.1", 00:31:58.694 "trsvcid": "4420" 00:31:58.694 }, 00:31:58.694 "method": "nvmf_subsystem_add_listener", 00:31:58.694 "req_id": 1 00:31:58.694 } 00:31:58.694 Got JSON-RPC error response 00:31:58.694 response: 00:31:58.694 { 00:31:58.694 "code": -32602, 00:31:58.694 "message": "Invalid parameters" 00:31:58.694 } 00:31:58.694 10:57:22 keyring_file -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:31:58.694 10:57:22 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:31:58.694 10:57:22 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:31:58.694 10:57:22 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:31:58.694 10:57:22 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:31:58.694 10:57:22 keyring_file -- keyring/file.sh@46 -- # bperfpid=1070698 00:31:58.694 10:57:22 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1070698 /var/tmp/bperf.sock 00:31:58.694 10:57:22 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:31:58.694 10:57:22 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 1070698 ']' 00:31:58.694 10:57:22 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:58.694 10:57:22 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:58.694 10:57:22 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:58.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:58.694 10:57:22 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:58.694 10:57:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:58.694 [2024-06-10 10:57:22.803904] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 
00:31:58.694 [2024-06-10 10:57:22.803949] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1070698 ] 00:31:58.694 EAL: No free 2048 kB hugepages reported on node 1 00:31:58.694 [2024-06-10 10:57:22.879238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:58.694 [2024-06-10 10:57:22.943665] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:31:59.636 10:57:23 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:59.636 10:57:23 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:31:59.636 10:57:23 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YYNIMCxA1i 00:31:59.636 10:57:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YYNIMCxA1i 00:31:59.636 10:57:23 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.e5BdJaNplD 00:31:59.636 10:57:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.e5BdJaNplD 00:31:59.636 10:57:23 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:31:59.636 10:57:23 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:31:59.636 10:57:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:59.636 10:57:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:59.636 10:57:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:59.897 10:57:24 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.YYNIMCxA1i == \/\t\m\p\/\t\m\p\.\Y\Y\N\I\M\C\x\A\1\i ]] 00:31:59.897 10:57:24 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:31:59.897 10:57:24 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:31:59.897 10:57:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:59.897 10:57:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:59.897 10:57:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:00.158 10:57:24 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.e5BdJaNplD == \/\t\m\p\/\t\m\p\.\e\5\B\d\J\a\N\p\l\D ]] 00:32:00.158 10:57:24 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:32:00.158 10:57:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:00.158 10:57:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:00.158 10:57:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:00.158 10:57:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:00.158 10:57:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:00.158 10:57:24 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:32:00.158 10:57:24 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:32:00.158 10:57:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:00.158 10:57:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:00.158 10:57:24 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:00.158 10:57:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:00.158 10:57:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:00.419 10:57:24 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:00.419 10:57:24 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:00.419 10:57:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:00.419 [2024-06-10 10:57:24.648471] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:00.680 nvme0n1 00:32:00.680 10:57:24 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:32:00.680 10:57:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:00.680 10:57:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:00.680 10:57:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:00.681 10:57:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:00.681 10:57:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:00.681 10:57:24 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:32:00.681 10:57:24 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:32:00.681 10:57:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:00.681 10:57:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:00.681 10:57:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:00.681 10:57:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:00.681 10:57:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:00.941 10:57:25 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:32:00.941 10:57:25 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:00.941 Running I/O for 1 seconds... 
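The I/O run above is driven entirely over bdevperf's private RPC socket. Collected in one place from the trace, with paths and arguments exactly as logged: register the key file, attach a TLS-protected TCP controller with it, then start the workload (bdevperf was launched with -z, so it idles until perform_tests arrives).

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bperf_sock=/var/tmp/bperf.sock

"$rpc" -s "$bperf_sock" keyring_file_add_key key0 /tmp/tmp.YYNIMCxA1i
"$rpc" -s "$bperf_sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

# -z keeps bdevperf waiting; I/O begins only when perform_tests is sent:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s "$bperf_sock" perform_tests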
00:32:01.884 00:32:01.884 Latency(us) 00:32:01.884 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:01.884 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:32:01.884 nvme0n1 : 1.01 10158.06 39.68 0.00 0.00 12514.10 4505.60 19660.80 00:32:01.884 =================================================================================================================== 00:32:01.884 Total : 10158.06 39.68 0.00 0.00 12514.10 4505.60 19660.80 00:32:01.885 0 00:32:02.149 10:57:26 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:02.149 10:57:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:02.149 10:57:26 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:32:02.149 10:57:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:02.149 10:57:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:02.149 10:57:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:02.149 10:57:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:02.149 10:57:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:02.475 10:57:26 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:32:02.475 10:57:26 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:32:02.475 10:57:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:02.475 10:57:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:02.475 10:57:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:02.475 10:57:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:02.475 10:57:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:02.475 10:57:26 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:02.475 10:57:26 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:02.475 10:57:26 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:32:02.475 10:57:26 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:02.475 10:57:26 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:32:02.475 10:57:26 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:02.475 10:57:26 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:32:02.475 10:57:26 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:02.476 10:57:26 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:02.476 10:57:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk key1 00:32:02.740 [2024-06-10 10:57:26.819873] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:02.740 [2024-06-10 10:57:26.820676] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9114a0 (107): Transport endpoint is not connected 00:32:02.740 [2024-06-10 10:57:26.821673] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9114a0 (9): Bad file descriptor 00:32:02.740 [2024-06-10 10:57:26.822675] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:02.740 [2024-06-10 10:57:26.822682] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:02.740 [2024-06-10 10:57:26.822687] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:02.740 request: 00:32:02.740 { 00:32:02.740 "name": "nvme0", 00:32:02.740 "trtype": "tcp", 00:32:02.740 "traddr": "127.0.0.1", 00:32:02.740 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:02.740 "adrfam": "ipv4", 00:32:02.740 "trsvcid": "4420", 00:32:02.740 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:02.740 "psk": "key1", 00:32:02.740 "method": "bdev_nvme_attach_controller", 00:32:02.740 "req_id": 1 00:32:02.740 } 00:32:02.740 Got JSON-RPC error response 00:32:02.740 response: 00:32:02.740 { 00:32:02.740 "code": -5, 00:32:02.740 "message": "Input/output error" 00:32:02.740 } 00:32:02.740 10:57:26 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:32:02.740 10:57:26 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:02.740 10:57:26 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:02.740 10:57:26 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:02.740 10:57:26 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:32:02.740 10:57:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:02.740 10:57:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:02.740 10:57:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:02.740 10:57:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:02.740 10:57:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:02.740 10:57:26 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:32:02.740 10:57:26 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:32:02.740 10:57:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:02.740 10:57:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:02.740 10:57:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:02.740 10:57:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:02.740 10:57:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:03.001 10:57:27 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:03.001 10:57:27 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:32:03.001 10:57:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 
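Every refcount assertion in this test (file.sh@53 through file.sh@72 above) reduces to the same query; condensed here from the keyring_get_keys calls and the two jq filters used in the trace:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

get_refcnt() {   # usage: get_refcnt key0
    "$rpc" -s /var/tmp/bperf.sock keyring_get_keys \
        | jq -r ".[] | select(.name == \"$1\") | .refcnt"
}

get_refcnt key0   # 2 while nvme0 holds the key, back to 1 after the detach
get_refcnt key1   # stays at 1: the attach with --psk key1 failed, so no extra reference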
00:32:03.262 10:57:27 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:32:03.262 10:57:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:03.262 10:57:27 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:32:03.262 10:57:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:03.262 10:57:27 keyring_file -- keyring/file.sh@77 -- # jq length 00:32:03.523 10:57:27 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:32:03.523 10:57:27 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.YYNIMCxA1i 00:32:03.523 10:57:27 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.YYNIMCxA1i 00:32:03.523 10:57:27 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:32:03.523 10:57:27 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.YYNIMCxA1i 00:32:03.524 10:57:27 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:32:03.524 10:57:27 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:03.524 10:57:27 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:32:03.524 10:57:27 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:03.524 10:57:27 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YYNIMCxA1i 00:32:03.524 10:57:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YYNIMCxA1i 00:32:03.524 [2024-06-10 10:57:27.747477] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.YYNIMCxA1i': 0100660 00:32:03.524 [2024-06-10 10:57:27.747495] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:03.524 request: 00:32:03.524 { 00:32:03.524 "name": "key0", 00:32:03.524 "path": "/tmp/tmp.YYNIMCxA1i", 00:32:03.524 "method": "keyring_file_add_key", 00:32:03.524 "req_id": 1 00:32:03.524 } 00:32:03.524 Got JSON-RPC error response 00:32:03.524 response: 00:32:03.524 { 00:32:03.524 "code": -1, 00:32:03.524 "message": "Operation not permitted" 00:32:03.524 } 00:32:03.524 10:57:27 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:32:03.524 10:57:27 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:03.524 10:57:27 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:03.524 10:57:27 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:03.524 10:57:27 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.YYNIMCxA1i 00:32:03.524 10:57:27 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YYNIMCxA1i 00:32:03.524 10:57:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YYNIMCxA1i 00:32:03.785 10:57:27 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.YYNIMCxA1i 00:32:03.785 10:57:27 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:32:03.785 10:57:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:03.785 10:57:27 keyring_file -- keyring/common.sh@12 -- # jq -r 
.refcnt 00:32:03.785 10:57:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:03.785 10:57:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:03.785 10:57:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:04.045 10:57:28 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:32:04.045 10:57:28 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:04.045 10:57:28 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:32:04.045 10:57:28 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:04.045 10:57:28 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:32:04.045 10:57:28 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:04.045 10:57:28 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:32:04.045 10:57:28 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:04.045 10:57:28 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:04.045 10:57:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:04.045 [2024-06-10 10:57:28.220670] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.YYNIMCxA1i': No such file or directory 00:32:04.045 [2024-06-10 10:57:28.220686] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:04.045 [2024-06-10 10:57:28.220703] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:04.045 [2024-06-10 10:57:28.220708] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:04.045 [2024-06-10 10:57:28.220713] bdev_nvme.c:6263:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:04.046 request: 00:32:04.046 { 00:32:04.046 "name": "nvme0", 00:32:04.046 "trtype": "tcp", 00:32:04.046 "traddr": "127.0.0.1", 00:32:04.046 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:04.046 "adrfam": "ipv4", 00:32:04.046 "trsvcid": "4420", 00:32:04.046 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:04.046 "psk": "key0", 00:32:04.046 "method": "bdev_nvme_attach_controller", 00:32:04.046 "req_id": 1 00:32:04.046 } 00:32:04.046 Got JSON-RPC error response 00:32:04.046 response: 00:32:04.046 { 00:32:04.046 "code": -19, 00:32:04.046 "message": "No such device" 00:32:04.046 } 00:32:04.046 10:57:28 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:32:04.046 10:57:28 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:04.046 10:57:28 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:04.046 10:57:28 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:04.046 10:57:28 
keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:32:04.046 10:57:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:04.307 10:57:28 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:04.307 10:57:28 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:04.307 10:57:28 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:04.307 10:57:28 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:04.307 10:57:28 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:04.307 10:57:28 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:04.307 10:57:28 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.te0HYYIIt8 00:32:04.307 10:57:28 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:04.307 10:57:28 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:04.307 10:57:28 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:04.307 10:57:28 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:04.307 10:57:28 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:04.307 10:57:28 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:04.307 10:57:28 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:04.307 10:57:28 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.te0HYYIIt8 00:32:04.307 10:57:28 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.te0HYYIIt8 00:32:04.307 10:57:28 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.te0HYYIIt8 00:32:04.307 10:57:28 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.te0HYYIIt8 00:32:04.307 10:57:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.te0HYYIIt8 00:32:04.307 10:57:28 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:04.307 10:57:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:04.569 nvme0n1 00:32:04.569 10:57:28 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:32:04.569 10:57:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:04.569 10:57:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:04.569 10:57:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:04.569 10:57:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:04.569 10:57:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:04.830 10:57:28 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:32:04.830 10:57:28 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:32:04.830 10:57:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_file_remove_key key0 00:32:05.091 10:57:29 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:32:05.091 10:57:29 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:32:05.091 10:57:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:05.091 10:57:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:05.091 10:57:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:05.091 10:57:29 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:32:05.091 10:57:29 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:32:05.091 10:57:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:05.091 10:57:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:05.091 10:57:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:05.091 10:57:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:05.091 10:57:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:05.352 10:57:29 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:32:05.352 10:57:29 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:05.352 10:57:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:05.352 10:57:29 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:32:05.352 10:57:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:05.352 10:57:29 keyring_file -- keyring/file.sh@104 -- # jq length 00:32:05.613 10:57:29 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:32:05.613 10:57:29 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.te0HYYIIt8 00:32:05.613 10:57:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.te0HYYIIt8 00:32:05.873 10:57:29 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.e5BdJaNplD 00:32:05.873 10:57:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.e5BdJaNplD 00:32:05.873 10:57:30 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:05.873 10:57:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:06.134 nvme0n1 00:32:06.134 10:57:30 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:32:06.134 10:57:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:06.395 10:57:30 keyring_file -- keyring/file.sh@112 -- # config='{ 00:32:06.395 "subsystems": [ 00:32:06.395 { 00:32:06.395 
"subsystem": "keyring", 00:32:06.395 "config": [ 00:32:06.395 { 00:32:06.395 "method": "keyring_file_add_key", 00:32:06.395 "params": { 00:32:06.395 "name": "key0", 00:32:06.395 "path": "/tmp/tmp.te0HYYIIt8" 00:32:06.395 } 00:32:06.395 }, 00:32:06.395 { 00:32:06.395 "method": "keyring_file_add_key", 00:32:06.395 "params": { 00:32:06.395 "name": "key1", 00:32:06.395 "path": "/tmp/tmp.e5BdJaNplD" 00:32:06.395 } 00:32:06.395 } 00:32:06.395 ] 00:32:06.395 }, 00:32:06.395 { 00:32:06.395 "subsystem": "iobuf", 00:32:06.395 "config": [ 00:32:06.395 { 00:32:06.395 "method": "iobuf_set_options", 00:32:06.395 "params": { 00:32:06.395 "small_pool_count": 8192, 00:32:06.395 "large_pool_count": 1024, 00:32:06.395 "small_bufsize": 8192, 00:32:06.395 "large_bufsize": 135168 00:32:06.395 } 00:32:06.395 } 00:32:06.395 ] 00:32:06.395 }, 00:32:06.395 { 00:32:06.395 "subsystem": "sock", 00:32:06.395 "config": [ 00:32:06.395 { 00:32:06.395 "method": "sock_set_default_impl", 00:32:06.395 "params": { 00:32:06.395 "impl_name": "posix" 00:32:06.395 } 00:32:06.395 }, 00:32:06.395 { 00:32:06.395 "method": "sock_impl_set_options", 00:32:06.395 "params": { 00:32:06.395 "impl_name": "ssl", 00:32:06.395 "recv_buf_size": 4096, 00:32:06.395 "send_buf_size": 4096, 00:32:06.395 "enable_recv_pipe": true, 00:32:06.395 "enable_quickack": false, 00:32:06.395 "enable_placement_id": 0, 00:32:06.395 "enable_zerocopy_send_server": true, 00:32:06.395 "enable_zerocopy_send_client": false, 00:32:06.395 "zerocopy_threshold": 0, 00:32:06.395 "tls_version": 0, 00:32:06.395 "enable_ktls": false 00:32:06.395 } 00:32:06.395 }, 00:32:06.395 { 00:32:06.395 "method": "sock_impl_set_options", 00:32:06.395 "params": { 00:32:06.395 "impl_name": "posix", 00:32:06.395 "recv_buf_size": 2097152, 00:32:06.395 "send_buf_size": 2097152, 00:32:06.395 "enable_recv_pipe": true, 00:32:06.395 "enable_quickack": false, 00:32:06.395 "enable_placement_id": 0, 00:32:06.395 "enable_zerocopy_send_server": true, 00:32:06.395 "enable_zerocopy_send_client": false, 00:32:06.395 "zerocopy_threshold": 0, 00:32:06.395 "tls_version": 0, 00:32:06.395 "enable_ktls": false 00:32:06.395 } 00:32:06.395 } 00:32:06.395 ] 00:32:06.395 }, 00:32:06.395 { 00:32:06.395 "subsystem": "vmd", 00:32:06.395 "config": [] 00:32:06.395 }, 00:32:06.395 { 00:32:06.395 "subsystem": "accel", 00:32:06.395 "config": [ 00:32:06.395 { 00:32:06.395 "method": "accel_set_options", 00:32:06.395 "params": { 00:32:06.395 "small_cache_size": 128, 00:32:06.395 "large_cache_size": 16, 00:32:06.395 "task_count": 2048, 00:32:06.395 "sequence_count": 2048, 00:32:06.395 "buf_count": 2048 00:32:06.395 } 00:32:06.395 } 00:32:06.395 ] 00:32:06.395 }, 00:32:06.395 { 00:32:06.395 "subsystem": "bdev", 00:32:06.395 "config": [ 00:32:06.395 { 00:32:06.395 "method": "bdev_set_options", 00:32:06.395 "params": { 00:32:06.395 "bdev_io_pool_size": 65535, 00:32:06.395 "bdev_io_cache_size": 256, 00:32:06.395 "bdev_auto_examine": true, 00:32:06.395 "iobuf_small_cache_size": 128, 00:32:06.395 "iobuf_large_cache_size": 16 00:32:06.395 } 00:32:06.395 }, 00:32:06.395 { 00:32:06.395 "method": "bdev_raid_set_options", 00:32:06.395 "params": { 00:32:06.395 "process_window_size_kb": 1024 00:32:06.395 } 00:32:06.395 }, 00:32:06.395 { 00:32:06.395 "method": "bdev_iscsi_set_options", 00:32:06.395 "params": { 00:32:06.395 "timeout_sec": 30 00:32:06.395 } 00:32:06.395 }, 00:32:06.395 { 00:32:06.395 "method": "bdev_nvme_set_options", 00:32:06.395 "params": { 00:32:06.395 "action_on_timeout": "none", 00:32:06.395 "timeout_us": 0, 00:32:06.395 
"timeout_admin_us": 0, 00:32:06.395 "keep_alive_timeout_ms": 10000, 00:32:06.395 "arbitration_burst": 0, 00:32:06.395 "low_priority_weight": 0, 00:32:06.395 "medium_priority_weight": 0, 00:32:06.395 "high_priority_weight": 0, 00:32:06.395 "nvme_adminq_poll_period_us": 10000, 00:32:06.395 "nvme_ioq_poll_period_us": 0, 00:32:06.395 "io_queue_requests": 512, 00:32:06.395 "delay_cmd_submit": true, 00:32:06.395 "transport_retry_count": 4, 00:32:06.395 "bdev_retry_count": 3, 00:32:06.395 "transport_ack_timeout": 0, 00:32:06.395 "ctrlr_loss_timeout_sec": 0, 00:32:06.395 "reconnect_delay_sec": 0, 00:32:06.395 "fast_io_fail_timeout_sec": 0, 00:32:06.395 "disable_auto_failback": false, 00:32:06.395 "generate_uuids": false, 00:32:06.395 "transport_tos": 0, 00:32:06.395 "nvme_error_stat": false, 00:32:06.395 "rdma_srq_size": 0, 00:32:06.395 "io_path_stat": false, 00:32:06.395 "allow_accel_sequence": false, 00:32:06.395 "rdma_max_cq_size": 0, 00:32:06.395 "rdma_cm_event_timeout_ms": 0, 00:32:06.395 "dhchap_digests": [ 00:32:06.395 "sha256", 00:32:06.395 "sha384", 00:32:06.395 "sha512" 00:32:06.395 ], 00:32:06.395 "dhchap_dhgroups": [ 00:32:06.395 "null", 00:32:06.395 "ffdhe2048", 00:32:06.395 "ffdhe3072", 00:32:06.395 "ffdhe4096", 00:32:06.395 "ffdhe6144", 00:32:06.395 "ffdhe8192" 00:32:06.395 ] 00:32:06.395 } 00:32:06.395 }, 00:32:06.395 { 00:32:06.395 "method": "bdev_nvme_attach_controller", 00:32:06.395 "params": { 00:32:06.395 "name": "nvme0", 00:32:06.395 "trtype": "TCP", 00:32:06.395 "adrfam": "IPv4", 00:32:06.395 "traddr": "127.0.0.1", 00:32:06.395 "trsvcid": "4420", 00:32:06.395 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:06.395 "prchk_reftag": false, 00:32:06.395 "prchk_guard": false, 00:32:06.395 "ctrlr_loss_timeout_sec": 0, 00:32:06.395 "reconnect_delay_sec": 0, 00:32:06.395 "fast_io_fail_timeout_sec": 0, 00:32:06.395 "psk": "key0", 00:32:06.395 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:06.395 "hdgst": false, 00:32:06.395 "ddgst": false 00:32:06.395 } 00:32:06.395 }, 00:32:06.395 { 00:32:06.395 "method": "bdev_nvme_set_hotplug", 00:32:06.395 "params": { 00:32:06.395 "period_us": 100000, 00:32:06.395 "enable": false 00:32:06.395 } 00:32:06.395 }, 00:32:06.395 { 00:32:06.395 "method": "bdev_wait_for_examine" 00:32:06.395 } 00:32:06.395 ] 00:32:06.395 }, 00:32:06.395 { 00:32:06.395 "subsystem": "nbd", 00:32:06.395 "config": [] 00:32:06.395 } 00:32:06.395 ] 00:32:06.395 }' 00:32:06.395 10:57:30 keyring_file -- keyring/file.sh@114 -- # killprocess 1070698 00:32:06.395 10:57:30 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 1070698 ']' 00:32:06.395 10:57:30 keyring_file -- common/autotest_common.sh@953 -- # kill -0 1070698 00:32:06.395 10:57:30 keyring_file -- common/autotest_common.sh@954 -- # uname 00:32:06.395 10:57:30 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:06.395 10:57:30 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1070698 00:32:06.395 10:57:30 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:32:06.395 10:57:30 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:32:06.395 10:57:30 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1070698' 00:32:06.395 killing process with pid 1070698 00:32:06.395 10:57:30 keyring_file -- common/autotest_common.sh@968 -- # kill 1070698 00:32:06.395 Received shutdown signal, test time was about 1.000000 seconds 00:32:06.395 00:32:06.395 Latency(us) 00:32:06.395 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:06.395 =================================================================================================================== 00:32:06.395 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:06.395 10:57:30 keyring_file -- common/autotest_common.sh@973 -- # wait 1070698 00:32:06.656 10:57:30 keyring_file -- keyring/file.sh@117 -- # bperfpid=1072187 00:32:06.657 10:57:30 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1072187 /var/tmp/bperf.sock 00:32:06.657 10:57:30 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 1072187 ']' 00:32:06.657 10:57:30 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:06.657 10:57:30 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:06.657 10:57:30 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:32:06.657 10:57:30 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:06.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:06.657 10:57:30 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:06.657 10:57:30 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:32:06.657 "subsystems": [ 00:32:06.657 { 00:32:06.657 "subsystem": "keyring", 00:32:06.657 "config": [ 00:32:06.657 { 00:32:06.657 "method": "keyring_file_add_key", 00:32:06.657 "params": { 00:32:06.657 "name": "key0", 00:32:06.657 "path": "/tmp/tmp.te0HYYIIt8" 00:32:06.657 } 00:32:06.657 }, 00:32:06.657 { 00:32:06.657 "method": "keyring_file_add_key", 00:32:06.657 "params": { 00:32:06.657 "name": "key1", 00:32:06.657 "path": "/tmp/tmp.e5BdJaNplD" 00:32:06.657 } 00:32:06.657 } 00:32:06.657 ] 00:32:06.657 }, 00:32:06.657 { 00:32:06.657 "subsystem": "iobuf", 00:32:06.657 "config": [ 00:32:06.657 { 00:32:06.657 "method": "iobuf_set_options", 00:32:06.657 "params": { 00:32:06.657 "small_pool_count": 8192, 00:32:06.657 "large_pool_count": 1024, 00:32:06.657 "small_bufsize": 8192, 00:32:06.657 "large_bufsize": 135168 00:32:06.657 } 00:32:06.657 } 00:32:06.657 ] 00:32:06.657 }, 00:32:06.657 { 00:32:06.657 "subsystem": "sock", 00:32:06.657 "config": [ 00:32:06.657 { 00:32:06.657 "method": "sock_set_default_impl", 00:32:06.657 "params": { 00:32:06.657 "impl_name": "posix" 00:32:06.657 } 00:32:06.657 }, 00:32:06.657 { 00:32:06.657 "method": "sock_impl_set_options", 00:32:06.657 "params": { 00:32:06.657 "impl_name": "ssl", 00:32:06.657 "recv_buf_size": 4096, 00:32:06.657 "send_buf_size": 4096, 00:32:06.657 "enable_recv_pipe": true, 00:32:06.657 "enable_quickack": false, 00:32:06.657 "enable_placement_id": 0, 00:32:06.657 "enable_zerocopy_send_server": true, 00:32:06.657 "enable_zerocopy_send_client": false, 00:32:06.657 "zerocopy_threshold": 0, 00:32:06.657 "tls_version": 0, 00:32:06.657 "enable_ktls": false 00:32:06.657 } 00:32:06.657 }, 00:32:06.657 { 00:32:06.657 "method": "sock_impl_set_options", 00:32:06.657 "params": { 00:32:06.657 "impl_name": "posix", 00:32:06.657 "recv_buf_size": 2097152, 00:32:06.657 "send_buf_size": 2097152, 00:32:06.657 "enable_recv_pipe": true, 00:32:06.657 "enable_quickack": false, 00:32:06.657 "enable_placement_id": 0, 00:32:06.657 "enable_zerocopy_send_server": true, 00:32:06.657 "enable_zerocopy_send_client": false, 00:32:06.657 
"zerocopy_threshold": 0, 00:32:06.657 "tls_version": 0, 00:32:06.657 "enable_ktls": false 00:32:06.657 } 00:32:06.657 } 00:32:06.657 ] 00:32:06.657 }, 00:32:06.657 { 00:32:06.657 "subsystem": "vmd", 00:32:06.657 "config": [] 00:32:06.657 }, 00:32:06.657 { 00:32:06.657 "subsystem": "accel", 00:32:06.657 "config": [ 00:32:06.657 { 00:32:06.657 "method": "accel_set_options", 00:32:06.657 "params": { 00:32:06.657 "small_cache_size": 128, 00:32:06.657 "large_cache_size": 16, 00:32:06.657 "task_count": 2048, 00:32:06.657 "sequence_count": 2048, 00:32:06.657 "buf_count": 2048 00:32:06.657 } 00:32:06.657 } 00:32:06.657 ] 00:32:06.657 }, 00:32:06.657 { 00:32:06.657 "subsystem": "bdev", 00:32:06.657 "config": [ 00:32:06.657 { 00:32:06.657 "method": "bdev_set_options", 00:32:06.657 "params": { 00:32:06.657 "bdev_io_pool_size": 65535, 00:32:06.657 "bdev_io_cache_size": 256, 00:32:06.657 "bdev_auto_examine": true, 00:32:06.657 "iobuf_small_cache_size": 128, 00:32:06.657 "iobuf_large_cache_size": 16 00:32:06.657 } 00:32:06.657 }, 00:32:06.657 { 00:32:06.657 "method": "bdev_raid_set_options", 00:32:06.657 "params": { 00:32:06.657 "process_window_size_kb": 1024 00:32:06.657 } 00:32:06.657 }, 00:32:06.657 { 00:32:06.657 "method": "bdev_iscsi_set_options", 00:32:06.657 "params": { 00:32:06.657 "timeout_sec": 30 00:32:06.657 } 00:32:06.657 }, 00:32:06.657 { 00:32:06.657 "method": "bdev_nvme_set_options", 00:32:06.657 "params": { 00:32:06.657 "action_on_timeout": "none", 00:32:06.657 "timeout_us": 0, 00:32:06.657 "timeout_admin_us": 0, 00:32:06.657 "keep_alive_timeout_ms": 10000, 00:32:06.657 "arbitration_burst": 0, 00:32:06.657 "low_priority_weight": 0, 00:32:06.657 "medium_priority_weight": 0, 00:32:06.657 "high_priority_weight": 0, 00:32:06.657 "nvme_adminq_poll_period_us": 10000, 00:32:06.657 "nvme_ioq_poll_period_us": 0, 00:32:06.657 "io_queue_requests": 512, 00:32:06.657 "delay_cmd_submit": true, 00:32:06.657 "transport_retry_count": 4, 00:32:06.657 "bdev_retry_count": 3, 00:32:06.657 "transport_ack_timeout": 0, 00:32:06.657 "ctrlr_loss_timeout_sec": 0, 00:32:06.657 "reconnect_delay_sec": 0, 00:32:06.657 "fast_io_fail_timeout_sec": 0, 00:32:06.657 "disable_auto_failback": false, 00:32:06.657 "generate_uuids": false, 00:32:06.657 "transport_tos": 0, 00:32:06.657 "nvme_error_stat": false, 00:32:06.657 "rdma_srq_size": 0, 00:32:06.657 "io_path_stat": false, 00:32:06.657 "allow_accel_sequence": false, 00:32:06.657 10:57:30 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:06.657 "rdma_max_cq_size": 0, 00:32:06.657 "rdma_cm_event_timeout_ms": 0, 00:32:06.657 "dhchap_digests": [ 00:32:06.657 "sha256", 00:32:06.657 "sha384", 00:32:06.657 "sha512" 00:32:06.657 ], 00:32:06.657 "dhchap_dhgroups": [ 00:32:06.657 "null", 00:32:06.657 "ffdhe2048", 00:32:06.657 "ffdhe3072", 00:32:06.657 "ffdhe4096", 00:32:06.657 "ffdhe6144", 00:32:06.657 "ffdhe8192" 00:32:06.657 ] 00:32:06.657 } 00:32:06.657 }, 00:32:06.657 { 00:32:06.657 "method": "bdev_nvme_attach_controller", 00:32:06.657 "params": { 00:32:06.657 "name": "nvme0", 00:32:06.657 "trtype": "TCP", 00:32:06.657 "adrfam": "IPv4", 00:32:06.657 "traddr": "127.0.0.1", 00:32:06.657 "trsvcid": "4420", 00:32:06.657 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:06.657 "prchk_reftag": false, 00:32:06.657 "prchk_guard": false, 00:32:06.657 "ctrlr_loss_timeout_sec": 0, 00:32:06.657 "reconnect_delay_sec": 0, 00:32:06.657 "fast_io_fail_timeout_sec": 0, 00:32:06.657 "psk": "key0", 00:32:06.657 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:06.657 "hdgst": false, 
00:32:06.657 "ddgst": false 00:32:06.657 } 00:32:06.657 }, 00:32:06.657 { 00:32:06.657 "method": "bdev_nvme_set_hotplug", 00:32:06.657 "params": { 00:32:06.657 "period_us": 100000, 00:32:06.657 "enable": false 00:32:06.657 } 00:32:06.657 }, 00:32:06.657 { 00:32:06.657 "method": "bdev_wait_for_examine" 00:32:06.657 } 00:32:06.657 ] 00:32:06.657 }, 00:32:06.657 { 00:32:06.657 "subsystem": "nbd", 00:32:06.657 "config": [] 00:32:06.657 } 00:32:06.657 ] 00:32:06.657 }' 00:32:06.657 [2024-06-10 10:57:30.756217] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:32:06.657 [2024-06-10 10:57:30.756277] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1072187 ] 00:32:06.657 EAL: No free 2048 kB hugepages reported on node 1 00:32:06.657 [2024-06-10 10:57:30.831347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.657 [2024-06-10 10:57:30.885140] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:32:06.918 [2024-06-10 10:57:31.026917] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:07.490 10:57:31 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:07.490 10:57:31 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:32:07.490 10:57:31 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:32:07.490 10:57:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:07.490 10:57:31 keyring_file -- keyring/file.sh@120 -- # jq length 00:32:07.490 10:57:31 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:32:07.490 10:57:31 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:32:07.490 10:57:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:07.490 10:57:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:07.490 10:57:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:07.490 10:57:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:07.490 10:57:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:07.751 10:57:31 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:07.751 10:57:31 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:32:07.751 10:57:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:07.751 10:57:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:07.751 10:57:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:07.751 10:57:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:07.751 10:57:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:07.751 10:57:31 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:32:07.751 10:57:31 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:32:07.751 10:57:31 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:32:07.751 10:57:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 
00:32:08.013 10:57:32 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:32:08.013 10:57:32 keyring_file -- keyring/file.sh@1 -- # cleanup 00:32:08.013 10:57:32 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.te0HYYIIt8 /tmp/tmp.e5BdJaNplD 00:32:08.013 10:57:32 keyring_file -- keyring/file.sh@20 -- # killprocess 1072187 00:32:08.013 10:57:32 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 1072187 ']' 00:32:08.013 10:57:32 keyring_file -- common/autotest_common.sh@953 -- # kill -0 1072187 00:32:08.013 10:57:32 keyring_file -- common/autotest_common.sh@954 -- # uname 00:32:08.013 10:57:32 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:08.013 10:57:32 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1072187 00:32:08.013 10:57:32 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:32:08.013 10:57:32 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:32:08.013 10:57:32 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1072187' 00:32:08.013 killing process with pid 1072187 00:32:08.013 10:57:32 keyring_file -- common/autotest_common.sh@968 -- # kill 1072187 00:32:08.013 Received shutdown signal, test time was about 1.000000 seconds 00:32:08.013 00:32:08.013 Latency(us) 00:32:08.013 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:08.013 =================================================================================================================== 00:32:08.013 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:08.013 10:57:32 keyring_file -- common/autotest_common.sh@973 -- # wait 1072187 00:32:08.274 10:57:32 keyring_file -- keyring/file.sh@21 -- # killprocess 1070383 00:32:08.274 10:57:32 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 1070383 ']' 00:32:08.274 10:57:32 keyring_file -- common/autotest_common.sh@953 -- # kill -0 1070383 00:32:08.274 10:57:32 keyring_file -- common/autotest_common.sh@954 -- # uname 00:32:08.274 10:57:32 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:08.274 10:57:32 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1070383 00:32:08.274 10:57:32 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:32:08.274 10:57:32 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:32:08.274 10:57:32 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1070383' 00:32:08.274 killing process with pid 1070383 00:32:08.274 10:57:32 keyring_file -- common/autotest_common.sh@968 -- # kill 1070383 00:32:08.274 [2024-06-10 10:57:32.376064] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:32:08.274 [2024-06-10 10:57:32.376102] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:08.274 10:57:32 keyring_file -- common/autotest_common.sh@973 -- # wait 1070383 00:32:08.535 00:32:08.535 real 0m10.986s 00:32:08.535 user 0m25.699s 00:32:08.535 sys 0m2.657s 00:32:08.535 10:57:32 keyring_file -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:08.535 10:57:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:08.535 ************************************ 00:32:08.535 END TEST keyring_file 
00:32:08.535 ************************************ 00:32:08.535 10:57:32 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:32:08.535 10:57:32 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:08.535 10:57:32 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:32:08.535 10:57:32 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:08.535 10:57:32 -- common/autotest_common.sh@10 -- # set +x 00:32:08.535 ************************************ 00:32:08.535 START TEST keyring_linux 00:32:08.535 ************************************ 00:32:08.535 10:57:32 keyring_linux -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:08.535 * Looking for test storage... 00:32:08.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:08.535 10:57:32 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:08.535 10:57:32 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:08.535 10:57:32 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:32:08.535 10:57:32 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:08.535 10:57:32 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:08.535 10:57:32 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:08.535 10:57:32 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:08.535 10:57:32 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:08.535 10:57:32 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:08.535 10:57:32 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:08.535 10:57:32 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:08.535 10:57:32 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:08.535 10:57:32 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:08.535 10:57:32 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:08.535 10:57:32 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:08.535 10:57:32 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:08.535 10:57:32 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:08.535 10:57:32 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:08.535 10:57:32 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:08.535 10:57:32 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:08.535 10:57:32 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:08.535 10:57:32 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:08.535 10:57:32 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:08.535 10:57:32 keyring_linux -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:08.536 10:57:32 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:08.536 10:57:32 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:08.536 10:57:32 keyring_linux -- paths/export.sh@5 -- # export PATH 00:32:08.536 10:57:32 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:08.536 10:57:32 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:32:08.536 10:57:32 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:08.536 10:57:32 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:08.536 10:57:32 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:08.536 10:57:32 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:08.536 10:57:32 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:08.536 10:57:32 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:08.536 10:57:32 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:08.536 10:57:32 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:08.536 10:57:32 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:08.536 10:57:32 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:08.536 10:57:32 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:08.536 10:57:32 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:32:08.536 10:57:32 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:32:08.536 10:57:32 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:32:08.536 10:57:32 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:32:08.536 10:57:32 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:08.536 10:57:32 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:32:08.536 10:57:32 keyring_linux -- keyring/common.sh@17 -- # 
key=00112233445566778899aabbccddeeff 00:32:08.536 10:57:32 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:08.536 10:57:32 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:32:08.536 10:57:32 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:08.536 10:57:32 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:08.536 10:57:32 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:08.536 10:57:32 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:08.536 10:57:32 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:08.536 10:57:32 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:08.536 10:57:32 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:08.797 10:57:32 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:32:08.797 10:57:32 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:32:08.797 /tmp/:spdk-test:key0 00:32:08.797 10:57:32 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:32:08.797 10:57:32 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:08.797 10:57:32 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:32:08.797 10:57:32 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:08.797 10:57:32 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:08.797 10:57:32 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:32:08.797 10:57:32 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:08.797 10:57:32 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:08.797 10:57:32 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:08.797 10:57:32 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:08.797 10:57:32 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:08.797 10:57:32 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:08.797 10:57:32 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:08.797 10:57:32 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:32:08.797 10:57:32 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:32:08.797 /tmp/:spdk-test:key1 00:32:08.797 10:57:32 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1072719 00:32:08.797 10:57:32 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1072719 00:32:08.797 10:57:32 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:08.797 10:57:32 keyring_linux -- common/autotest_common.sh@830 -- # '[' -z 1072719 ']' 00:32:08.797 10:57:32 keyring_linux -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:08.797 10:57:32 keyring_linux -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:08.797 10:57:32 keyring_linux -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:08.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:08.797 10:57:32 keyring_linux -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:08.797 10:57:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:08.797 [2024-06-10 10:57:32.952755] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:32:08.797 [2024-06-10 10:57:32.952830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1072719 ] 00:32:08.797 EAL: No free 2048 kB hugepages reported on node 1 00:32:08.797 [2024-06-10 10:57:33.016856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:09.058 [2024-06-10 10:57:33.093665] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 0 00:32:09.629 10:57:33 keyring_linux -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:09.629 10:57:33 keyring_linux -- common/autotest_common.sh@863 -- # return 0 00:32:09.629 10:57:33 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:32:09.629 10:57:33 keyring_linux -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:09.629 10:57:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:09.629 [2024-06-10 10:57:33.722053] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:09.629 null0 00:32:09.629 [2024-06-10 10:57:33.754088] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:32:09.629 [2024-06-10 10:57:33.754134] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:09.629 [2024-06-10 10:57:33.754514] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:09.629 10:57:33 keyring_linux -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:09.629 10:57:33 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:32:09.629 397119163 00:32:09.629 10:57:33 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:32:09.629 907387574 00:32:09.629 10:57:33 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1072947 00:32:09.629 10:57:33 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1072947 /var/tmp/bperf.sock 00:32:09.629 10:57:33 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:32:09.629 10:57:33 keyring_linux -- common/autotest_common.sh@830 -- # '[' -z 1072947 ']' 00:32:09.629 10:57:33 keyring_linux -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:09.629 10:57:33 keyring_linux -- common/autotest_common.sh@835 -- # local max_retries=100 00:32:09.629 10:57:33 keyring_linux -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:09.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
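The keyctl additions just above and the bdev_nvme_attach_controller call a few lines further on are the heart of this keyring_linux run: the TLS PSK is stored in the kernel session keyring under a ":spdk-test:" name, and the RPC then refers to it by name instead of by value. The sketch below replays that sequence outside the test harness. It is a minimal illustration only: it assumes the spdk_tgt TLS listener on 127.0.0.1:4420 and the bdevperf instance serving /var/tmp/bperf.sock from this log are already running, and the workspace path is the one used by this job.

```bash
#!/usr/bin/env bash
set -euo pipefail

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk       # checkout path used by this job
PSK='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'  # key0 as printed above

# Store the PSK in the session keyring; keyctl echoes the key's serial number.
sn=$(keyctl add user :spdk-test:key0 "$PSK" @s)
echo "stored :spdk-test:key0 as serial $sn"

# The key can later be located again by name (this is what the test's get_keysn does).
keyctl search @s user :spdk-test:key0

# Attach an NVMe-oF/TCP controller through the bdevperf RPC socket, passing the
# PSK by keyring name via --psk rather than embedding the secret on the command line.
"$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --psk :spdk-test:key0

# Cleanup, mirroring the test's unlink_key helper.
keyctl unlink "$sn" @s
```

The keyring_linux_set_options --enable call a few lines below is what makes bdevperf resolve the ":spdk-test:key0" name through the kernel keyring in the first place; without it the --psk argument would not be looked up there.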
00:32:09.629 10:57:33 keyring_linux -- common/autotest_common.sh@839 -- # xtrace_disable 00:32:09.629 10:57:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:09.629 [2024-06-10 10:57:33.829809] Starting SPDK v24.09-pre git sha1 bab0baf30 / DPDK 24.03.0 initialization... 00:32:09.629 [2024-06-10 10:57:33.829855] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1072947 ] 00:32:09.629 EAL: No free 2048 kB hugepages reported on node 1 00:32:09.629 [2024-06-10 10:57:33.904517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:09.889 [2024-06-10 10:57:33.958670] reactor.c: 943:reactor_run: *NOTICE*: Reactor started on core 1 00:32:10.460 10:57:34 keyring_linux -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:32:10.460 10:57:34 keyring_linux -- common/autotest_common.sh@863 -- # return 0 00:32:10.460 10:57:34 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:32:10.460 10:57:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:32:10.460 10:57:34 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:32:10.460 10:57:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:10.721 10:57:34 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:10.721 10:57:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:10.982 [2024-06-10 10:57:35.073772] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:10.982 nvme0n1 00:32:10.982 10:57:35 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:32:10.982 10:57:35 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:32:10.982 10:57:35 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:10.982 10:57:35 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:10.982 10:57:35 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:10.982 10:57:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:11.243 10:57:35 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:32:11.243 10:57:35 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:11.243 10:57:35 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:32:11.243 10:57:35 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:32:11.243 10:57:35 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:11.243 10:57:35 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:32:11.243 10:57:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:32:11.243 10:57:35 keyring_linux -- keyring/linux.sh@25 -- # sn=397119163 00:32:11.243 10:57:35 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:32:11.243 10:57:35 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:32:11.243 10:57:35 keyring_linux -- keyring/linux.sh@26 -- # [[ 397119163 == \3\9\7\1\1\9\1\6\3 ]] 00:32:11.243 10:57:35 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 397119163 00:32:11.243 10:57:35 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:32:11.243 10:57:35 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:11.503 Running I/O for 1 seconds... 00:32:12.444 00:32:12.444 Latency(us) 00:32:12.444 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:12.444 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:12.444 nvme0n1 : 1.01 9460.50 36.96 0.00 0.00 13431.22 8738.13 18786.99 00:32:12.444 =================================================================================================================== 00:32:12.444 Total : 9460.50 36.96 0.00 0.00 13431.22 8738.13 18786.99 00:32:12.444 0 00:32:12.444 10:57:36 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:12.444 10:57:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:12.705 10:57:36 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:32:12.705 10:57:36 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:32:12.705 10:57:36 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:12.705 10:57:36 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:12.705 10:57:36 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:12.705 10:57:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:12.705 10:57:36 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:32:12.705 10:57:36 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:12.705 10:57:36 keyring_linux -- keyring/linux.sh@23 -- # return 00:32:12.705 10:57:36 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:12.705 10:57:36 keyring_linux -- common/autotest_common.sh@649 -- # local es=0 00:32:12.705 10:57:36 keyring_linux -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:12.705 10:57:36 keyring_linux -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:32:12.705 10:57:36 keyring_linux -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:12.705 10:57:36 keyring_linux -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:32:12.705 10:57:36 keyring_linux -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:12.705 10:57:36 
keyring_linux -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:12.705 10:57:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:12.966 [2024-06-10 10:57:37.077476] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:12.966 [2024-06-10 10:57:37.078186] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2152480 (107): Transport endpoint is not connected 00:32:12.966 [2024-06-10 10:57:37.079182] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2152480 (9): Bad file descriptor 00:32:12.966 [2024-06-10 10:57:37.080184] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:12.966 [2024-06-10 10:57:37.080190] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:12.966 [2024-06-10 10:57:37.080195] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:12.966 request: 00:32:12.966 { 00:32:12.966 "name": "nvme0", 00:32:12.966 "trtype": "tcp", 00:32:12.966 "traddr": "127.0.0.1", 00:32:12.966 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:12.966 "adrfam": "ipv4", 00:32:12.966 "trsvcid": "4420", 00:32:12.966 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:12.966 "psk": ":spdk-test:key1", 00:32:12.966 "method": "bdev_nvme_attach_controller", 00:32:12.966 "req_id": 1 00:32:12.966 } 00:32:12.966 Got JSON-RPC error response 00:32:12.966 response: 00:32:12.966 { 00:32:12.966 "code": -5, 00:32:12.966 "message": "Input/output error" 00:32:12.966 } 00:32:12.966 10:57:37 keyring_linux -- common/autotest_common.sh@652 -- # es=1 00:32:12.966 10:57:37 keyring_linux -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:12.966 10:57:37 keyring_linux -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:12.966 10:57:37 keyring_linux -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:12.966 10:57:37 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:32:12.966 10:57:37 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:12.966 10:57:37 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:32:12.966 10:57:37 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:32:12.966 10:57:37 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:32:12.966 10:57:37 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:32:12.966 10:57:37 keyring_linux -- keyring/linux.sh@33 -- # sn=397119163 00:32:12.966 10:57:37 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 397119163 00:32:12.966 1 links removed 00:32:12.966 10:57:37 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:12.966 10:57:37 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:32:12.966 10:57:37 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:32:12.966 10:57:37 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:32:12.966 10:57:37 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key1 00:32:12.966 10:57:37 keyring_linux -- keyring/linux.sh@33 -- # sn=907387574 00:32:12.966 10:57:37 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 907387574 00:32:12.966 1 links removed 00:32:12.966 10:57:37 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1072947 00:32:12.966 10:57:37 keyring_linux -- common/autotest_common.sh@949 -- # '[' -z 1072947 ']' 00:32:12.966 10:57:37 keyring_linux -- common/autotest_common.sh@953 -- # kill -0 1072947 00:32:12.966 10:57:37 keyring_linux -- common/autotest_common.sh@954 -- # uname 00:32:12.966 10:57:37 keyring_linux -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:12.966 10:57:37 keyring_linux -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1072947 00:32:12.966 10:57:37 keyring_linux -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:32:12.966 10:57:37 keyring_linux -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:32:12.966 10:57:37 keyring_linux -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1072947' 00:32:12.966 killing process with pid 1072947 00:32:12.966 10:57:37 keyring_linux -- common/autotest_common.sh@968 -- # kill 1072947 00:32:12.966 Received shutdown signal, test time was about 1.000000 seconds 00:32:12.966 00:32:12.966 Latency(us) 00:32:12.966 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:12.966 =================================================================================================================== 00:32:12.966 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:12.966 10:57:37 keyring_linux -- common/autotest_common.sh@973 -- # wait 1072947 00:32:13.227 10:57:37 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1072719 00:32:13.227 10:57:37 keyring_linux -- common/autotest_common.sh@949 -- # '[' -z 1072719 ']' 00:32:13.227 10:57:37 keyring_linux -- common/autotest_common.sh@953 -- # kill -0 1072719 00:32:13.227 10:57:37 keyring_linux -- common/autotest_common.sh@954 -- # uname 00:32:13.227 10:57:37 keyring_linux -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:13.227 10:57:37 keyring_linux -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1072719 00:32:13.227 10:57:37 keyring_linux -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:32:13.227 10:57:37 keyring_linux -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:32:13.227 10:57:37 keyring_linux -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1072719' 00:32:13.227 killing process with pid 1072719 00:32:13.227 10:57:37 keyring_linux -- common/autotest_common.sh@968 -- # kill 1072719 00:32:13.227 [2024-06-10 10:57:37.331361] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:32:13.227 10:57:37 keyring_linux -- common/autotest_common.sh@973 -- # wait 1072719 00:32:13.488 00:32:13.488 real 0m4.885s 00:32:13.488 user 0m8.347s 00:32:13.488 sys 0m1.463s 00:32:13.488 10:57:37 keyring_linux -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:13.488 10:57:37 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:13.488 ************************************ 00:32:13.488 END TEST keyring_linux 00:32:13.488 ************************************ 00:32:13.488 10:57:37 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:32:13.488 10:57:37 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:32:13.488 10:57:37 
-- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:32:13.488 10:57:37 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:32:13.488 10:57:37 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:32:13.488 10:57:37 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:32:13.488 10:57:37 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:32:13.488 10:57:37 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:32:13.488 10:57:37 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:32:13.488 10:57:37 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:32:13.488 10:57:37 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:32:13.488 10:57:37 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:32:13.488 10:57:37 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:32:13.488 10:57:37 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:32:13.488 10:57:37 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:32:13.488 10:57:37 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:32:13.488 10:57:37 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:32:13.488 10:57:37 -- common/autotest_common.sh@723 -- # xtrace_disable 00:32:13.488 10:57:37 -- common/autotest_common.sh@10 -- # set +x 00:32:13.488 10:57:37 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:32:13.488 10:57:37 -- common/autotest_common.sh@1391 -- # local autotest_es=0 00:32:13.488 10:57:37 -- common/autotest_common.sh@1392 -- # xtrace_disable 00:32:13.488 10:57:37 -- common/autotest_common.sh@10 -- # set +x 00:32:21.631 INFO: APP EXITING 00:32:21.631 INFO: killing all VMs 00:32:21.631 INFO: killing vhost app 00:32:21.631 INFO: EXIT DONE 00:32:24.932 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:32:24.932 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:32:24.932 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:32:24.932 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:32:24.932 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:32:24.932 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:32:24.932 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:32:24.932 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:32:24.932 0000:65:00.0 (144d a80a): Already using the nvme driver 00:32:24.932 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:32:24.932 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:32:24.932 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:32:24.932 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:32:24.932 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:32:24.932 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:32:24.932 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:32:24.932 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:32:28.236 Cleaning 00:32:28.236 Removing: /var/run/dpdk/spdk0/config 00:32:28.236 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:28.236 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:28.236 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:28.236 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:28.236 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:28.236 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:28.236 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:28.236 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:28.236 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:28.236 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:28.236 Removing: 
/var/run/dpdk/spdk1/config 00:32:28.236 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:28.236 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:28.236 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:28.236 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:28.236 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:32:28.236 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:32:28.236 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:32:28.236 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:32:28.236 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:28.236 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:28.236 Removing: /var/run/dpdk/spdk1/mp_socket 00:32:28.236 Removing: /var/run/dpdk/spdk2/config 00:32:28.236 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:28.236 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:28.236 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:28.236 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:28.236 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:32:28.236 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:32:28.236 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:32:28.236 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:32:28.236 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:28.236 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:28.236 Removing: /var/run/dpdk/spdk3/config 00:32:28.236 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:28.236 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:28.236 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:28.236 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:28.236 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:32:28.236 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:32:28.236 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:32:28.236 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:32:28.236 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:28.496 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:28.496 Removing: /var/run/dpdk/spdk4/config 00:32:28.496 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:28.496 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:28.496 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:28.496 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:28.496 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:32:28.496 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:32:28.496 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:32:28.496 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:32:28.496 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:28.496 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:28.496 Removing: /dev/shm/bdev_svc_trace.1 00:32:28.496 Removing: /dev/shm/nvmf_trace.0 00:32:28.496 Removing: /dev/shm/spdk_tgt_trace.pid613231 00:32:28.496 Removing: /var/run/dpdk/spdk0 00:32:28.496 Removing: /var/run/dpdk/spdk1 00:32:28.496 Removing: /var/run/dpdk/spdk2 00:32:28.496 Removing: /var/run/dpdk/spdk3 00:32:28.496 Removing: /var/run/dpdk/spdk4 00:32:28.496 Removing: /var/run/dpdk/spdk_pid1005422 00:32:28.496 Removing: /var/run/dpdk/spdk_pid1006190 00:32:28.496 Removing: /var/run/dpdk/spdk_pid1006957 00:32:28.496 Removing: /var/run/dpdk/spdk_pid1007694 00:32:28.496 Removing: /var/run/dpdk/spdk_pid1008660 00:32:28.496 Removing: /var/run/dpdk/spdk_pid1009438 
00:32:28.496 Removing: /var/run/dpdk/spdk_pid1010130 00:32:28.496 Removing: /var/run/dpdk/spdk_pid1010809 00:32:28.496 Removing: /var/run/dpdk/spdk_pid1016025 00:32:28.496 Removing: /var/run/dpdk/spdk_pid1016357 00:32:28.496 Removing: /var/run/dpdk/spdk_pid1023914 00:32:28.496 Removing: /var/run/dpdk/spdk_pid1024287 00:32:28.496 Removing: /var/run/dpdk/spdk_pid1026796 00:32:28.496 Removing: /var/run/dpdk/spdk_pid1034003 00:32:28.496 Removing: /var/run/dpdk/spdk_pid1034083 00:32:28.496 Removing: /var/run/dpdk/spdk_pid1040213 00:32:28.496 Removing: /var/run/dpdk/spdk_pid1042444 00:32:28.496 Removing: /var/run/dpdk/spdk_pid1044825 00:32:28.496 Removing: /var/run/dpdk/spdk_pid1046146 00:32:28.496 Removing: /var/run/dpdk/spdk_pid1048664 00:32:28.496 Removing: /var/run/dpdk/spdk_pid1049885 00:32:28.496 Removing: /var/run/dpdk/spdk_pid1059928 00:32:28.496 Removing: /var/run/dpdk/spdk_pid1060578 00:32:28.496 Removing: /var/run/dpdk/spdk_pid1061243 00:32:28.496 Removing: /var/run/dpdk/spdk_pid1064010 00:32:28.496 Removing: /var/run/dpdk/spdk_pid1064663 00:32:28.496 Removing: /var/run/dpdk/spdk_pid1065332 00:32:28.496 Removing: /var/run/dpdk/spdk_pid1070383 00:32:28.496 Removing: /var/run/dpdk/spdk_pid1070698 00:32:28.496 Removing: /var/run/dpdk/spdk_pid1072187 00:32:28.496 Removing: /var/run/dpdk/spdk_pid1072719 00:32:28.496 Removing: /var/run/dpdk/spdk_pid1072947 00:32:28.496 Removing: /var/run/dpdk/spdk_pid611622 00:32:28.496 Removing: /var/run/dpdk/spdk_pid613231 00:32:28.496 Removing: /var/run/dpdk/spdk_pid613942 00:32:28.496 Removing: /var/run/dpdk/spdk_pid614978 00:32:28.496 Removing: /var/run/dpdk/spdk_pid615316 00:32:28.496 Removing: /var/run/dpdk/spdk_pid616383 00:32:28.496 Removing: /var/run/dpdk/spdk_pid616710 00:32:28.496 Removing: /var/run/dpdk/spdk_pid616840 00:32:28.496 Removing: /var/run/dpdk/spdk_pid617966 00:32:28.497 Removing: /var/run/dpdk/spdk_pid618592 00:32:28.758 Removing: /var/run/dpdk/spdk_pid618863 00:32:28.758 Removing: /var/run/dpdk/spdk_pid619196 00:32:28.758 Removing: /var/run/dpdk/spdk_pid619604 00:32:28.758 Removing: /var/run/dpdk/spdk_pid619992 00:32:28.758 Removing: /var/run/dpdk/spdk_pid620344 00:32:28.758 Removing: /var/run/dpdk/spdk_pid620536 00:32:28.758 Removing: /var/run/dpdk/spdk_pid620779 00:32:28.758 Removing: /var/run/dpdk/spdk_pid622147 00:32:28.758 Removing: /var/run/dpdk/spdk_pid625403 00:32:28.758 Removing: /var/run/dpdk/spdk_pid625773 00:32:28.758 Removing: /var/run/dpdk/spdk_pid626136 00:32:28.758 Removing: /var/run/dpdk/spdk_pid626455 00:32:28.758 Removing: /var/run/dpdk/spdk_pid626843 00:32:28.758 Removing: /var/run/dpdk/spdk_pid626882 00:32:28.758 Removing: /var/run/dpdk/spdk_pid627550 00:32:28.758 Removing: /var/run/dpdk/spdk_pid627568 00:32:28.758 Removing: /var/run/dpdk/spdk_pid627932 00:32:28.758 Removing: /var/run/dpdk/spdk_pid628112 00:32:28.758 Removing: /var/run/dpdk/spdk_pid628303 00:32:28.758 Removing: /var/run/dpdk/spdk_pid628582 00:32:28.758 Removing: /var/run/dpdk/spdk_pid629075 00:32:28.758 Removing: /var/run/dpdk/spdk_pid629323 00:32:28.758 Removing: /var/run/dpdk/spdk_pid629588 00:32:28.758 Removing: /var/run/dpdk/spdk_pid629871 00:32:28.758 Removing: /var/run/dpdk/spdk_pid629936 00:32:28.758 Removing: /var/run/dpdk/spdk_pid630279 00:32:28.758 Removing: /var/run/dpdk/spdk_pid630480 00:32:28.758 Removing: /var/run/dpdk/spdk_pid630689 00:32:28.758 Removing: /var/run/dpdk/spdk_pid631018 00:32:28.758 Removing: /var/run/dpdk/spdk_pid631365 00:32:28.758 Removing: /var/run/dpdk/spdk_pid631720 00:32:28.758 Removing: 
/var/run/dpdk/spdk_pid631924 00:32:28.758 Removing: /var/run/dpdk/spdk_pid632121 00:32:28.758 Removing: /var/run/dpdk/spdk_pid632459 00:32:28.758 Removing: /var/run/dpdk/spdk_pid632808 00:32:28.758 Removing: /var/run/dpdk/spdk_pid633155 00:32:28.758 Removing: /var/run/dpdk/spdk_pid633361 00:32:28.758 Removing: /var/run/dpdk/spdk_pid633562 00:32:28.758 Removing: /var/run/dpdk/spdk_pid633890 00:32:28.758 Removing: /var/run/dpdk/spdk_pid634248 00:32:28.758 Removing: /var/run/dpdk/spdk_pid634595 00:32:28.758 Removing: /var/run/dpdk/spdk_pid634847 00:32:28.758 Removing: /var/run/dpdk/spdk_pid635044 00:32:28.758 Removing: /var/run/dpdk/spdk_pid635342 00:32:28.758 Removing: /var/run/dpdk/spdk_pid635695 00:32:28.758 Removing: /var/run/dpdk/spdk_pid636051 00:32:28.758 Removing: /var/run/dpdk/spdk_pid636120 00:32:28.758 Removing: /var/run/dpdk/spdk_pid636529 00:32:28.758 Removing: /var/run/dpdk/spdk_pid641046 00:32:28.758 Removing: /var/run/dpdk/spdk_pid694552 00:32:28.758 Removing: /var/run/dpdk/spdk_pid699771 00:32:28.758 Removing: /var/run/dpdk/spdk_pid711784 00:32:28.758 Removing: /var/run/dpdk/spdk_pid718618 00:32:28.758 Removing: /var/run/dpdk/spdk_pid723644 00:32:28.758 Removing: /var/run/dpdk/spdk_pid724324 00:32:28.758 Removing: /var/run/dpdk/spdk_pid738408 00:32:28.758 Removing: /var/run/dpdk/spdk_pid738461 00:32:28.758 Removing: /var/run/dpdk/spdk_pid739460 00:32:28.758 Removing: /var/run/dpdk/spdk_pid740464 00:32:28.758 Removing: /var/run/dpdk/spdk_pid741472 00:32:28.758 Removing: /var/run/dpdk/spdk_pid742148 00:32:28.758 Removing: /var/run/dpdk/spdk_pid742150 00:32:29.019 Removing: /var/run/dpdk/spdk_pid742485 00:32:29.019 Removing: /var/run/dpdk/spdk_pid742497 00:32:29.019 Removing: /var/run/dpdk/spdk_pid742511 00:32:29.019 Removing: /var/run/dpdk/spdk_pid743559 00:32:29.019 Removing: /var/run/dpdk/spdk_pid744579 00:32:29.019 Removing: /var/run/dpdk/spdk_pid745694 00:32:29.019 Removing: /var/run/dpdk/spdk_pid746335 00:32:29.019 Removing: /var/run/dpdk/spdk_pid746466 00:32:29.019 Removing: /var/run/dpdk/spdk_pid746716 00:32:29.019 Removing: /var/run/dpdk/spdk_pid747967 00:32:29.019 Removing: /var/run/dpdk/spdk_pid749356 00:32:29.019 Removing: /var/run/dpdk/spdk_pid759734 00:32:29.019 Removing: /var/run/dpdk/spdk_pid760544 00:32:29.019 Removing: /var/run/dpdk/spdk_pid765617 00:32:29.019 Removing: /var/run/dpdk/spdk_pid772570 00:32:29.019 Removing: /var/run/dpdk/spdk_pid775651 00:32:29.019 Removing: /var/run/dpdk/spdk_pid787938 00:32:29.019 Removing: /var/run/dpdk/spdk_pid798748 00:32:29.019 Removing: /var/run/dpdk/spdk_pid800806 00:32:29.019 Removing: /var/run/dpdk/spdk_pid802080 00:32:29.019 Removing: /var/run/dpdk/spdk_pid823104 00:32:29.019 Removing: /var/run/dpdk/spdk_pid827528 00:32:29.019 Removing: /var/run/dpdk/spdk_pid858325 00:32:29.019 Removing: /var/run/dpdk/spdk_pid864064 00:32:29.019 Removing: /var/run/dpdk/spdk_pid866062 00:32:29.019 Removing: /var/run/dpdk/spdk_pid868328 00:32:29.019 Removing: /var/run/dpdk/spdk_pid868424 00:32:29.019 Removing: /var/run/dpdk/spdk_pid868767 00:32:29.019 Removing: /var/run/dpdk/spdk_pid869033 00:32:29.019 Removing: /var/run/dpdk/spdk_pid869575 00:32:29.019 Removing: /var/run/dpdk/spdk_pid871837 00:32:29.020 Removing: /var/run/dpdk/spdk_pid872911 00:32:29.020 Removing: /var/run/dpdk/spdk_pid873426 00:32:29.020 Removing: /var/run/dpdk/spdk_pid875996 00:32:29.020 Removing: /var/run/dpdk/spdk_pid876705 00:32:29.020 Removing: /var/run/dpdk/spdk_pid877501 00:32:29.020 Removing: /var/run/dpdk/spdk_pid882533 00:32:29.020 Removing: 
/var/run/dpdk/spdk_pid894579 00:32:29.020 Removing: /var/run/dpdk/spdk_pid899395 00:32:29.020 Removing: /var/run/dpdk/spdk_pid906832 00:32:29.020 Removing: /var/run/dpdk/spdk_pid908840 00:32:29.020 Removing: /var/run/dpdk/spdk_pid910587 00:32:29.020 Removing: /var/run/dpdk/spdk_pid915807 00:32:29.020 Removing: /var/run/dpdk/spdk_pid920837 00:32:29.020 Removing: /var/run/dpdk/spdk_pid930022 00:32:29.020 Removing: /var/run/dpdk/spdk_pid930024 00:32:29.020 Removing: /var/run/dpdk/spdk_pid935128 00:32:29.020 Removing: /var/run/dpdk/spdk_pid935473 00:32:29.020 Removing: /var/run/dpdk/spdk_pid935582 00:32:29.020 Removing: /var/run/dpdk/spdk_pid936154 00:32:29.020 Removing: /var/run/dpdk/spdk_pid936159 00:32:29.020 Removing: /var/run/dpdk/spdk_pid941598 00:32:29.020 Removing: /var/run/dpdk/spdk_pid942423 00:32:29.020 Removing: /var/run/dpdk/spdk_pid947654 00:32:29.020 Removing: /var/run/dpdk/spdk_pid951008 00:32:29.020 Removing: /var/run/dpdk/spdk_pid957462 00:32:29.020 Removing: /var/run/dpdk/spdk_pid964151 00:32:29.020 Removing: /var/run/dpdk/spdk_pid974555 00:32:29.020 Removing: /var/run/dpdk/spdk_pid983227 00:32:29.020 Removing: /var/run/dpdk/spdk_pid983261 00:32:29.020 Clean 00:32:29.281 10:57:53 -- common/autotest_common.sh@1450 -- # return 0 00:32:29.281 10:57:53 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:32:29.281 10:57:53 -- common/autotest_common.sh@729 -- # xtrace_disable 00:32:29.281 10:57:53 -- common/autotest_common.sh@10 -- # set +x 00:32:29.281 10:57:53 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:32:29.281 10:57:53 -- common/autotest_common.sh@729 -- # xtrace_disable 00:32:29.281 10:57:53 -- common/autotest_common.sh@10 -- # set +x 00:32:29.281 10:57:53 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:32:29.281 10:57:53 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:32:29.281 10:57:53 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:32:29.281 10:57:53 -- spdk/autotest.sh@391 -- # hash lcov 00:32:29.281 10:57:53 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:32:29.281 10:57:53 -- spdk/autotest.sh@393 -- # hostname 00:32:29.281 10:57:53 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:32:29.542 geninfo: WARNING: invalid characters removed from testname! 
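The lcov invocations captured just above and continuing below aggregate this run's coverage: a capture of the instrumented tree, a merge with the pre-test baseline, and a series of removal passes that drop non-SPDK code. A condensed sketch of that flow is shown here, with the long --rc/--no-external option set abbreviated into one variable and the paths taken from this job; it is illustrative only, not the autotest script itself.

```bash
#!/usr/bin/env bash
set -euo pipefail

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
OUT=$SPDK/../output
LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q'

# Capture the coverage produced while the tests ran, tagged with the host name.
lcov $LCOV_OPTS -c -d "$SPDK" -t "$(hostname)" -o "$OUT/cov_test.info"

# Fold the test-run data into the baseline taken before the tests started.
lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

# Strip code that is not SPDK's own: DPDK, system headers, and sample apps.
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $LCOV_OPTS -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
done
```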
00:32:56.126 10:58:18 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:57.068 10:58:21 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:59.648 10:58:23 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:01.605 10:58:25 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:02.986 10:58:27 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:04.896 10:58:28 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:06.279 10:58:30 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:06.279 10:58:30 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:06.279 10:58:30 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:06.279 10:58:30 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:06.279 10:58:30 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:06.279 10:58:30 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.279 10:58:30 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.279 10:58:30 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.279 10:58:30 -- paths/export.sh@5 -- $ export PATH 00:33:06.279 10:58:30 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.279 10:58:30 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:33:06.279 10:58:30 -- common/autobuild_common.sh@437 -- $ date +%s 00:33:06.279 10:58:30 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718009910.XXXXXX 00:33:06.279 10:58:30 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718009910.hgA3bY 00:33:06.279 10:58:30 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:33:06.279 10:58:30 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:33:06.279 10:58:30 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:33:06.279 10:58:30 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:33:06.279 10:58:30 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:33:06.279 10:58:30 -- common/autobuild_common.sh@453 -- $ get_config_params 00:33:06.279 10:58:30 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:33:06.279 10:58:30 -- common/autotest_common.sh@10 -- $ set +x 00:33:06.279 10:58:30 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:33:06.279 10:58:30 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:33:06.279 10:58:30 -- pm/common@17 -- $ local monitor 00:33:06.279 10:58:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:06.279 10:58:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:06.279 10:58:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:06.279 10:58:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:06.279 10:58:30 -- pm/common@21 -- $ date +%s 00:33:06.279 10:58:30 -- pm/common@25 -- $ sleep 1 00:33:06.279 
10:58:30 -- pm/common@21 -- $ date +%s 00:33:06.279 10:58:30 -- pm/common@21 -- $ date +%s 00:33:06.279 10:58:30 -- pm/common@21 -- $ date +%s 00:33:06.279 10:58:30 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718009910 00:33:06.279 10:58:30 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718009910 00:33:06.279 10:58:30 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718009910 00:33:06.279 10:58:30 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718009910 00:33:06.279 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718009910_collect-vmstat.pm.log 00:33:06.279 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718009910_collect-cpu-load.pm.log 00:33:06.279 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718009910_collect-cpu-temp.pm.log 00:33:06.279 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718009910_collect-bmc-pm.bmc.pm.log 00:33:07.220 10:58:31 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:33:07.220 10:58:31 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:33:07.221 10:58:31 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:07.221 10:58:31 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:33:07.221 10:58:31 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:33:07.221 10:58:31 -- spdk/autopackage.sh@19 -- $ timing_finish 00:33:07.221 10:58:31 -- common/autotest_common.sh@735 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:07.221 10:58:31 -- common/autotest_common.sh@736 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:33:07.221 10:58:31 -- common/autotest_common.sh@738 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:07.221 10:58:31 -- spdk/autopackage.sh@20 -- $ exit 0 00:33:07.221 10:58:31 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:33:07.221 10:58:31 -- pm/common@29 -- $ signal_monitor_resources TERM 00:33:07.221 10:58:31 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:33:07.221 10:58:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:07.221 10:58:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:33:07.221 10:58:31 -- pm/common@44 -- $ pid=1085394 00:33:07.221 10:58:31 -- pm/common@50 -- $ kill -TERM 1085394 00:33:07.221 10:58:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:07.221 10:58:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:33:07.221 10:58:31 -- pm/common@44 -- $ pid=1085395 00:33:07.221 10:58:31 -- pm/common@50 -- $ 
kill -TERM 1085395 00:33:07.221 10:58:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:07.221 10:58:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:33:07.221 10:58:31 -- pm/common@44 -- $ pid=1085397 00:33:07.221 10:58:31 -- pm/common@50 -- $ kill -TERM 1085397 00:33:07.221 10:58:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:07.221 10:58:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:33:07.221 10:58:31 -- pm/common@44 -- $ pid=1085420 00:33:07.221 10:58:31 -- pm/common@50 -- $ sudo -E kill -TERM 1085420 00:33:07.221 + [[ -n 491860 ]] 00:33:07.221 + sudo kill 491860 00:33:07.231 [Pipeline] } 00:33:07.247 [Pipeline] // stage 00:33:07.252 [Pipeline] } 00:33:07.269 [Pipeline] // timeout 00:33:07.273 [Pipeline] } 00:33:07.289 [Pipeline] // catchError 00:33:07.293 [Pipeline] } 00:33:07.309 [Pipeline] // wrap 00:33:07.314 [Pipeline] } 00:33:07.328 [Pipeline] // catchError 00:33:07.336 [Pipeline] stage 00:33:07.338 [Pipeline] { (Epilogue) 00:33:07.350 [Pipeline] catchError 00:33:07.352 [Pipeline] { 00:33:07.364 [Pipeline] echo 00:33:07.365 Cleanup processes 00:33:07.371 [Pipeline] sh 00:33:07.657 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:07.657 1085502 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:33:07.657 1085943 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:07.671 [Pipeline] sh 00:33:07.961 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:07.961 ++ grep -v 'sudo pgrep' 00:33:07.961 ++ awk '{print $1}' 00:33:07.961 + sudo kill -9 1085502 00:33:07.971 [Pipeline] sh 00:33:08.252 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:20.482 [Pipeline] sh 00:33:20.768 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:33:20.769 Artifacts sizes are good 00:33:20.783 [Pipeline] archiveArtifacts 00:33:20.790 Archiving artifacts 00:33:20.982 [Pipeline] sh 00:33:21.266 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:33:21.309 [Pipeline] cleanWs 00:33:21.339 [WS-CLEANUP] Deleting project workspace... 00:33:21.339 [WS-CLEANUP] Deferred wipeout is used... 00:33:21.346 [WS-CLEANUP] done 00:33:21.348 [Pipeline] } 00:33:21.367 [Pipeline] // catchError 00:33:21.379 [Pipeline] sh 00:33:21.663 + logger -p user.info -t JENKINS-CI 00:33:21.673 [Pipeline] } 00:33:21.689 [Pipeline] // stage 00:33:21.695 [Pipeline] } 00:33:21.711 [Pipeline] // node 00:33:21.717 [Pipeline] End of Pipeline 00:33:21.749 Finished: SUCCESS